January 22, 2024 at 6:48 pm | Updated January 23, 2024 at 5:41 pm | 26 min read
Welcome to the fifth and final part of our webinar series on spectroscopy in agriculture!
This 5-part series covers the A-Z of internal quality assessment: spectroscopy, chemometrics, model building, model validation, and optimization in a commercial agriculture setting. In this final segment, our Director of Applied Science, Galen George, delves into the intricacies of calibration transfer, model maintenance, and optimization. This session is suited to both newcomers and seasoned professionals in the field.
Watch now to explore the following topics:
- Calibration Transfer
- Model Maintenance
- Optimization
A live Q&A session was hosted following the training.
Video Transcription
So today we are wrapping up our series on chemometrics.
So we’ve had four previous parts to this that have all discussed the varying components that go into building a model, or a calibration, for a near-infrared spectroscopy-based device.
And there’s a lot of components to it.
If this is the first webinar of the series that you’re catching, I highly recommend that you go back and you watch the previous parts of this series.
They are all on our YouTube channel and you can easily access those through our website or directly through YouTube.
So, just to introduce myself if you aren’t familiar already.
My name is Galen.
I am the director of Applied Science at Felix Instruments and I’ve been with the company now for about five years.
My background is mostly in food science and biochemistry.
I’ve worked a lot in the quality and safety sectors, doing quality and safety testing for food, agriculture and cannabis industries and commodities.
So actually before I get into the actual bulk of this presentation, I did want to kind of talk about some housekeeping stuff first.
If you guys have participated in these before, then you already know this, but we only utilize the chat if there are any kind of technical issues that we’re encountering.
If my internet cuts out, my audio cuts out, or you’re having trouble, then please feel free to use the chat function for that. For any questions that are actually related to the content of this webinar, please use the Q&A function in Zoom. After this webinar concludes, I will go into that Q&A function, where I’ll be able to see the list of questions, and we’ll be able to address those.
If you don’t put your question in that Q&A function, then I will not see it and I will not be able to address it. The chat is also there for any other discussion amongst the participants,
or if you need to get in touch with Susie, who is moderating this webinar.
She’s our distributor manager.
If you need to, you know, use the chat function for that, that’s fine.
Also, if there are any relevant links that are referenced in this presentation, those will be posted in the chat for you to click on as well, since you won’t be able to directly interact with the links that are in the presentation.
So without any further ado, let’s go ahead and get started.
And since this is the conclusion, this is definitely a much shorter component of the series than any of the other sections were.
So I thought it was best to maybe just give an overview of what we’ve already talked about, just have kind of a broad look at all the components that go into this model building process.
And so in general, the process involves initial sampling, and we talked about how important it is to have a robust sample that is representative of all of the variability that you might encounter in your real-life use case of the instrument.
We then had a section that talked about the spectral collection and the analytical testing, and how it’s important to standardize those processes so that you’re getting consistent data both from your spectral instrument, your NIR-based instrument, and from your laboratory testing, because the analytical testing is what is used as the reference to actually build these databases that are then used to create the models.
And then in our most recent section, we talked about the multivariate data analysis, the actual chemometrics that go into the building of the model:
once you’ve collected all of your analytical data and your spectral data, all the mathematics that go into how a model is created.
And then we also talked, in that most recent section, about the validation of the model, and how that’s just as important,
if not more important, than the actual process of building the model.
And then model deployment is the last stage, where you’re actually using it in your real-life use case scenario.
And the model deployment part is what we’re discussing today.
We’ll be getting into a little bit about some challenges that you’ll face when implementing this kind of technology in your organization.
And also just talk about what is required to ensure the longevity and robustness of your models.
Now, since we talked about validation most recently, I just want to give another overview of how that process works.
We discussed that internal model validation is the first step.
So once you build your model, you need to validate that internally.
We talked about a couple of methods that are used to do internal model validation.
So those methods include the holdout method, where you’re essentially just withholding a subset of your data and testing it against the model built from your training data, the rest of the data in your set.
And then there’s also something like k-fold cross-validation, which is essentially the same as holdout validation but done repeatedly, with different subsections of your sample set serving as the test and training sets; that is repeated for multiple iterations, and from that you get an average of the error in your model.
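To make those two procedures concrete, here is a minimal sketch of k-fold cross-validation in Python using scikit-learn. The PLS regression model, the array shapes, and the random placeholder data are illustrative assumptions, not the exact pipeline from this series.

```python
# Minimal sketch of 5-fold cross-validation for a spectroscopy model.
# X is a (samples x wavelengths) spectra matrix and y holds the lab
# reference values (e.g., dry matter %); both are random placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.random((100, 300))
y = rng.random(100)

rmses = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = PLSRegression(n_components=10)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx]).ravel()
    rmses.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))

print(f"Average RMSE across the 5 folds: {np.mean(rmses):.3f}")
```

Setting the number of splits equal to the number of samples turns this into leave-one-out validation, the extreme case of the same holdout idea.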
Then after that, we’re evaluating our model performance.
Once we’ve done our internal validation to make sure that the model’s performing as we expect it to, you then need to externally validate, or perform an independent validation, on a totally new set of fruit, or whatever commodity you’re testing.
That will give you the expected accuracy of how you can expect this model that you’ve built to be performing in the real-life scenario.
And so after you’ve evaluated that model performance and you’re happy with it, then you can go ahead and move forward with model deployment.
And so today we’re going to be talking about some considerations for once you’ve completed this initial big first step.
So you’ve gone through and you’ve collected all your data, you built the model, you validated it, and now you’re ready to start using it.
But there are some things you still need to consider moving forward to ensure that this technology, that this model, is going to work for you and your organization moving into the future.
And so what’s next after you’ve built and validated?
Well, unfortunately, the work isn’t completely over.
You still have some things to think about.
The first thing we’re going to talk about today is calibration transfer.
Now there is a problem that’s been known about since the development of this technology, where we can’t simply take a model that we’ve built with one instrument and put it on another instrument and expect it to work the exact same way.
Even if those two instruments are technically identical, like they have the same hardware and the same spectrometer, there is still going to be a mismatch in predictions, and sometimes in spectra.
And so what I want to talk about is ways we can address this problem, ways that other researchers and people in the past have addressed this problem, and how we can minimize the effects of the calibration transfer issue.
The other steps in this are going to be model optimization:
actually going through and assessing what your organization needs and what the actual use case of this is going to be, and implementing any changes you might need to in order to optimize how that model is working.
And then we have model maintenance.
Now, this is arguably one of the more important steps in all of this, because if you just build a model with one season’s worth of data, let’s use kiwifruit as an example, and then you expect it to work
year after year, season after season, in the same way that it worked for that first season
when you built the model, then you’re unfortunately going to be disappointed, because a model requires you to continuously maintain it and update it with new data.
That data needs to be relevant to and encompass any of the new variabilities that are going to be present.
So we talked about, in one of the previous sections, how seasonality is an important variable when we’re discussing these NIR-based models for agricultural commodities.
And so if you aren’t updating your model every season, then you aren’t going to be able to improve the robustness of that model, and you’ll see performance decrease, on average, in every season that you’re using it
beyond that initial first season that you used in your sample set when building the model.
And then the last step is something that’s a little more optional, but it’s definitely something that everyone should be aware of: general exploration, staying updated on advances in chemometrics and machine learning, reading publications to determine if there’s anything new that has been discovered that can be implemented and actually help improve the performance of your models.
So let’s go ahead and first focus on calibration transfer.
And I wanted to first, as I already briefly summarized, discuss what the problem is that we’re facing with calibration transfer.
So the problem is this: transferring a model or calibration, containing a database of spectra for one spectrometer or instrument, to one or more other instruments results in a loss of the model’s predictive accuracy and precision.
So there are two kinds of cases for this, and I’m using the Felix Instruments F 751s, and F 750s too, just to explain these two cases that arise.
And these are things that you need to think about.
This comes into the model optimization part of things, but these are things you have to think about when you’re thinking about adopting this technology.
You need to think about, you know, is this going to be something that we’re only going to use one instrument for, or are we going to need multiple instruments?
Because even in case one, where we’re taking a model that we built with an F 751 avocado quality meter and we’re just putting it onto three new F 751 avocado quality meters, those three new meters that all have that same model are not all going to predict with the same accuracy or precision as the original unit that was used to collect the data that is in the model.
Now, in the second case, there is an actual difference in spectrometers: the F 751 uses a slightly higher-powered spectrometer, and we’re taking a model that we built using an F 750 and then trying to give it to these three F 751 mango quality meters.
And in this case, we’re not just overcoming minor differences in the spectrometer hardware from the manufacturer, we’re overcoming actual differences in spec.
And so they’re totally different spectrometers.
So in case one, you can expect that model transfer to go a little more smoothly and the accuracy and precision loss to be a little bit less. In case number two, you’re going to see a much more significant difference in your accuracy and precision when you go to deploy these models on these
F 751s, because the model was built using a completely different spectrometer.
So how can we overcome this issue, this calibration transfer problem?
There’s multiple ways that we go about doing this.
And they encompass everything from hardware like instrument based methods to mathematical approaches.
And starting off, the most traditional method that was used to address this problem is a simple bias and slope adjustment.
So in case one, where we’re just transferring a model from one F 751 to multiple other new F 751s, it might be as simple as there being a slight bias in all of the predictions, and all we need to do is implement a small bias adjustment in the devices in order for them to line up with the original unit.
There’s also sometimes a slope correction that needs to be done.
However, slope adjustment is something that always is going to increase your error.
So say that, when you go to deploy the initial model you built on a new F 751, still in case one, you notice that in the new instrument the slope is a little bit off, maybe a little bit more steep. If you go to adjust the slope of that regression and have it tilt back so that it’s more lined up with a slope of one, which is the ideal slope,
then what you’re going to find is that all your predictions will spread out and increase in error.
And this happens either way that you’re adjusting that slope.
And so that’s something to be aware of, is that you’re actually increasing the spread of prediction when you’re adjusting the slope.
And so this is the most basic form of approaching how to minimize the effects of calibration transfer.
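For a rough feel of what that bias-and-slope adjustment looks like numerically, here is a small sketch. The paired predictions from the original unit and a new unit are made-up numbers, and the fit is an ordinary least-squares line.

```python
# Sketch of a simple slope-and-bias fit between two instruments.
# orig_pred / new_pred are predictions on the same five fruit from the
# original unit and a new unit; the values are made up.
import numpy as np

orig_pred = np.array([10.2, 12.5, 14.1, 16.8, 19.3])
new_pred = np.array([10.9, 13.6, 15.4, 18.5, 21.4])

# Fit new = slope * orig + bias, then invert it to correct the new unit.
slope, bias = np.polyfit(orig_pred, new_pred, 1)
corrected = (new_pred - bias) / slope

print(f"slope = {slope:.3f}, bias = {bias:.3f}")
print("corrected predictions:", np.round(corrected, 2))
```

Note that only the bias term comes for free: as described above, rotating the slope back toward one redistributes the residuals and tends to widen the spread of the corrected predictions.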
The first step that most people take, especially manufacturers of the instruments, is more hardware-based.
So we’re actually looking at instruments, different spectrometers, and we’re performing lots of tests where we’re testing standard targets and we’re looking at the spectra and we’re ensuring that the spectra are all aligning correctly.
This is typically done in development, or even in QC after development.
And this is also something that we do, especially as a manufacturer:
after we produce any new spectrometer, we always perform a test of some sort on a standardized target to ensure that its spectra look the same as they have for all the other previous units that were built, essentially.
Now, there’s always going to be some tolerance there; you’re never going to have the exact same spectra, even when you make as many minor hardware tweaks as you can.
So even though this is a great first step, it’s not going to solve the problem completely.
The next most common way, as a mathematical approach, is to actually perform some of those chemometric methods that we discussed, I believe it was two sessions ago.
There are too many of these methods to list, and each of them serves a specific purpose. All of these methods, like piecewise direct standardization or orthogonal projection and spectral regression approaches,
are meant to be used in a way that helps minimize those differences in spectra, but they are also typically derived from research that is looking at a specific problem.
So if you’re moving from a different spectrometer, like in case two, there might be a specific method that was derived to handle that issue versus a different kind of calibration transfer issue.
So you’ve got to be careful; it’s not just a matter of throwing everything at your model to help fix this issue.
You want to actually do your research and see what these methods were originally used for, and also whether they made a significant difference or not in calibration transfer.
There is a review of calibration transfer practices that was published in 2018 in Applied Spectroscopy.
If you want to learn more about the chemometric methods involved in alleviating the problems that we see in calibration transfer, then I highly recommend giving that a look over.
If you just do a Google Scholar search, you can find it; it is a free-access article, so you should be able to read it.
And like I said, there’s too many methods for me to actually go through each one.
And we already covered a lot of them in the previous session.
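For a flavor of what these standardization methods look like in practice, here is a heavily simplified sketch of direct standardization, a close relative of the piecewise direct standardization mentioned above: it learns a transformation matrix that maps new-instrument spectra into the original instrument’s spectral space. The matrix sizes and simulated spectra are illustrative assumptions, not a production recipe.

```python
# Sketch of direct standardization (DS) for calibration transfer.
# S_ref and S_new hold spectra of the SAME transfer samples measured on
# the original (reference) and new instruments; all data is simulated.
import numpy as np

rng = np.random.default_rng(0)
S_ref = rng.random((20, 300))                          # 20 samples x 300 wavelengths
S_new = S_ref + 0.05 * rng.standard_normal((20, 300))  # simulated instrument mismatch

# Learn F so that S_new @ F approximates S_ref (least squares via pseudoinverse).
F = np.linalg.pinv(S_new) @ S_ref

# A spectrum from the new instrument can now be mapped into the original
# instrument's spectral space before it is fed to the existing model.
incoming = rng.random((1, 300))
standardized = incoming @ F
```

The piecewise variant does the same thing window by window across the wavelength axis rather than with one full matrix, which is one reason each method suits a particular kind of mismatch.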
So what I really want to discuss most is machine learning, which is the category that we’re in right now for how we overcome the calibration transfer issue.
And so our current approach for this is that we are using neural networks:
we’re utilizing an artificial neural network to build our models.
And not only that, we’re also incorporating data from multiple different instruments or spectrometers, and that includes things like including both F 750 and F 751 data in the same model, so that the neural network can learn and see what those slight differences are
and will be able to adjust based on the spectral input that it gets from whatever device it is then deployed on.
So that’s the theory behind what we’re doing: these neural networks are much more capable of adjusting for these calibration transfer effects than any of the previous methods, because the classical chemometric methods all require a lot of human input, a lot of human resources and research, to be able to determine which one is going to be the best fit.
Whereas the neural network learns on its own how to identify those small differences in spectra from the different spectrometers, as well as across instruments of the same types.
So all of our models will include data from many F 751s and many F 750s, so that we can hopefully encompass all of the variability that we see between batches of spectrometers, essentially.
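A minimal sketch of that pooling idea is below, using scikit-learn’s MLPRegressor as a stand-in for the kind of artificial neural network described here; the data, shapes, and network sizes are all illustrative assumptions, not our production pipeline.

```python
# Sketch of pooling spectra from two instrument types into one training
# set for a neural-network calibration, so the network can learn the
# instrument-to-instrument differences. All data here is random filler.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_f750 = rng.random((200, 300))    # spectra collected on F 750 units
X_f751 = rng.random((200, 300))    # spectra collected on F 751 units
y_f750 = rng.random(200)           # matching lab reference values
y_f751 = rng.random(200)

X = np.vstack([X_f750, X_f751])    # one pooled training set
y = np.concatenate([y_f750, y_f751])

ann = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
ann.fit(X, y)
```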
And so that’s where we’re currently at. What we’re currently researching, and what researchers in spectroscopy are currently looking at, are convolutional neural networks.
This new research has yet to be published, and I don’t know when it’s going to be published, but the results that I’ve seen suggest that convolutional neural networks might actually be able to almost completely eliminate the bias effects in new instruments.
So even with the neural networks, even after we’ve incorporated all this data, we still usually have to implement a bias correction; every time we calibrate the instrument, we have to add in a small bias correction in order to line up those predictions.
But with the convolutional neural networks, a lot of the preliminary data is showing that we can almost negate the need to do that bias correction when we transfer the calibration to new instruments.
And we’ll obviously continue to research the convolutional neural networks and see what other improvements they might provide us.
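For readers curious what a spectral CNN even looks like, here is a tiny illustrative sketch in PyTorch; the layer sizes, kernel width, and single-output head are assumptions for demonstration, not the architecture from the unpublished research mentioned above.

```python
# Sketch of a small 1-D convolutional network over a spectrum. The
# convolution slides local filters along the wavelength axis, which is
# what lets a CNN pick up small instrument-to-instrument shifts.
import torch
import torch.nn as nn

class SpectraCNN(nn.Module):
    def __init__(self, n_wavelengths: int = 300):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=15, padding=7),  # local spectral filters
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.head = nn.Linear(8 * 32, 1)  # predicts one quality attribute

    def forward(self, x):                 # x: (batch, 1, n_wavelengths)
        z = self.features(x)
        return self.head(z.flatten(1))

model = SpectraCNN()
spectrum = torch.rand(4, 1, 300)          # batch of 4 illustrative spectra
print(model(spectrum).shape)              # torch.Size([4, 1])
```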
And then, on the bleeding edge, we have even more advanced AI techniques that are being discovered every year.
And so these are things we just have to continuously investigate to see if this is going to help us minimize the calibration transfer problem.
There is no perfect solution today, and I don’t know if there will be one in the future, but we are making sure that we stay on that cutting edge, that bleeding edge, to at least minimize, to the best effect possible,
all the issues that arise when we transfer a calibration to a brand-new unit.
So that’s the calibration transfer problem in a nutshell.
There are lots of ways you can go about actually addressing it, as you saw.
But what’s really important is that you think about it when you are considering implementing this kind of technology in your operation. What you don’t want to do is end up buying a single F 750, creating a model that ends up working really well, then deciding you want to deploy it on multiple instruments across a network of growers or other researchers, going to put that model on their devices, and finding out that it’s not performing nearly as well as yours. Once you decide that this is something you want to do, that you want to deploy multiple instruments from a model you built with a single instrument,
You’re going to have to think about model optimization.
You’re going to have to think about how to actually optimize this to work across all the devices, so that it’s performing similarly on all of them.
So that foresight is really important for determining what you are actually going to require from the technology.
Initially, you’re going to want to think about this before you even start building the model.
But then, once that initial model is built, that’s when you need to come back to this optimization thought pattern and think:
okay, if this is something we want to implement, what do we need to do to ensure that it’s going to work across the board for all the new instruments we want to deploy?
And a lot of the time, it is going to involve rework of the model: testing new chemometric techniques, adding in new data from new spectrometers and new instruments, and just generally improving model performance, or, like I said, reducing the effects of that calibration transfer.
So that’s model optimization.
Model maintenance is similar in that it is going to involve some rework of the model, but model maintenance is more about making sure that your model is going to be robust and perform well season over season, year over year.
And what you’re going to need to do to make sure your model is performing at that level is have some kind of testing scheme where you’re performing regular testing against some kind of benchmark, and you’re going to need to establish standards for how you do that testing and how often you do it.
And it’s going to be different for every user and every organization.
You know, some people might need to test multiple times throughout the season.
For instance, if you have multiple flowering events, you might have to actually calibrate at the start of each flowering event throughout your season; or, if you are finding that your accuracy from the beginning of the season starts to fade when it comes time to harvest, you might need to do a pre-season calibration and a mid-season calibration to ensure that your devices are performing up to standard.
And when you do those offset calibrations, I highly encourage people to do them
as a batch: if you have multiple instruments, you should be calibrating them all at the same time with the same fruit, or whatever commodity it is that you’re testing, so that you can actually identify if there are individual instruments that are predicting much more unreasonably, because then at least you’re using the same sample set across all the devices.
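Here is a small sketch of what that batch check can look like in code: every unit measures the same fruit, each unit’s mean bias against the lab reference is computed, and any unit outside a tolerance gets flagged. The values and the acceptance threshold are hypothetical.

```python
# Sketch of a batch offset check: all instruments measure the same fruit,
# predictions are compared to the lab reference, and any unit whose bias
# stands out is flagged. Thresholds and data are hypothetical.
import numpy as np

lab_reference = np.array([21.0, 23.5, 25.1, 27.8])   # e.g., dry matter %
device_preds = {
    "unit_A": np.array([21.4, 23.9, 25.6, 28.1]),
    "unit_B": np.array([21.2, 23.7, 25.3, 28.0]),
    "unit_C": np.array([23.8, 26.2, 27.9, 30.5]),    # a drifting unit
}

BIAS_LIMIT = 1.0  # hypothetical acceptance threshold
for unit, preds in device_preds.items():
    bias = float(np.mean(preds - lab_reference))
    status = "OK" if abs(bias) <= BIAS_LIMIT else "NEEDS RECALIBRATION"
    print(f"{unit}: mean bias = {bias:+.2f}  ->  {status}")
```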
And then the other part of model maintenance is going to be that inclusion of new data into the model, especially for the variable of seasonality; i.e., you’re going to need to include that new data in the model every year.
Now, here’s the thing: people might hear that and think, well, then what’s the point of this technology?
But when you’re using something like a neural network, which is what our current base chemometric models are, then you don’t need to rebuild the model in the exact same way.
You don’t have to actually go and add in the exact same amount of data and repeat that huge old process every year.
It can be a much smaller but much more focused effort, where you are including maybe a little less data, but that data is still representative,
so you still cover a wide range of variability.
So maybe instead of collecting, say, 100 fruit every week throughout the season, or whatever it is, maybe you’re collecting ten, maybe you’re collecting five, but at least you’re still incorporating all of that variability into the sample set.
All you’re doing when you do that is making your models so much more robust.
And the good news is that most research shows that after about three seasons, with four seasons being the gold standard, you start to lose that seasonality effect in a new model.
So all the models that we’ve built in-house include, in some cases, upwards of four to five seasons’ worth of data, because we want to make sure that, year over year, these models aren’t going to be consistently changing.
Now, climate change might throw a little bit of a wrench into that.
We have yet to see.
We don’t have research yet that has dived into that subject to see whether we stop seeing as much robustness as the years progress, like the previous research has shown.
But in general, there’s going to come a certain point where you don’t need to keep adding data,
because all of those seasonality differences are going to be incorporated into the model.
And it’s not just seasonality: if your organization acquires new farms, or if you are now also in charge of another new region, or new varieties come into play, then that’s the kind of variability we need to be updating our model with in order to ensure that it’s predicting accurately.
But the first step is going to be just testing, and that will help you understand whether or not this model is robust across those kinds of variables.
So in the case of variety, say you know that, on a physiological level, this new variety is actually quite similar to another variety that you already have a model built for.
You’ve got to test that first, and you might find that the model actually predicts just fine, but you also might find that it doesn’t.
And that’s what will notify you that you need to perform this kind of maintenance and update that model with some data from that new variety.
And really, what both of these things are doing is trying to ensure that our predictions across all the instruments are as similar, accurate, and robust as possible throughout the use of the instrument
and going into the future in your organization.
And that is the last thing that I really have to discuss about chemometrics and model building. Once you’re on a regular maintenance schedule and you’re making sure that you’re actively aware of this technology and what it requires, then these kinds of processes can be built into your organization and your normal routines.
And so it just becomes another part of all the other technology that you’re utilizing in your organization. And once you get to this point, it’s not as much work to maintain the model.
It’s just a little bit of effort every year to make sure that this instrument’s going to keep working for you as best as possible.
So, as I already mentioned, we actually wrapped up this webinar in one part; we kind of combined parts five and six because there really is not enough content to talk about both of them separately.
I could go into calibration transfer much more in-depth.
That actually could be an entire college course in and of itself.
So I highly recommend, if you want more information, just to look up some review articles like the one I mentioned.
if you want to learn more about the issues that we face in spectroscopy and how other researchers have overcome these issues.
But yeah, this is the conclusion of our chemometrics webinar series.
And thank you all so much for participating in this and I hope this is a valuable tool to help you learn and understand this technology.
I find that this is a very misunderstood technology: people go into it thinking that it’s an immediate, out-of-the-box solution, or they go into it not really understanding the basics of how this process works and how we actually use this technology to help improve our practices, especially when it comes to quality testing
and maturity or harvest-readiness kinds of testing.
And so that might seem like a lot that goes into it.
And there certainly is initially, but as I just told you, it really just requires a lot of careful planning.
And once you have actually sat down and come up with a really robust plan for how you’re going to go about implementing this technology, then the odds of you succeeding are extremely high, and you’ll have a technology at your hands that will be able to non-destructively measure internal quality attributes within a matter of seconds, saving you all that money and all that time.
It is very much worth it at that point to be able to implement this kind of technology. So thank you all again for sticking around for this entire series.
I know it’s been a little bit long, but I hope that it’s been helpful. If you do want more information, like pricing, then you can plug in this URL to get a quote for the device.
You can also always follow us on social media.
We post a lot of newsletters on LinkedIn and our social media, also interviews with customers, and a lot of cool content like that.
So please, always feel free to interact with us over social media, or go to our website.
You can find compilations of published research for all of our instruments.
You can find all of the specifications of the instruments.
You can also sign up for our newsletters, which have a lot of great content as well.
And then as always, you can feel free to call us as well.
So with that being said, I’m going to jump into the Q&A here.
And the first question is, can I have the slides of this event?
So, yes, we will be publishing not just the video of this, but we can also send out the actual slides for the PowerPoint as well.
The next question is, instead of virtual, is there any opportunity of physical participation sponsored by the host?
So, I am actually located in Michigan, not at our headquarters.
I’m a remote employee, but we are always happy to have visitors come check out our headquarters.
And if you happen to be in the area, then feel free to give us a call or send us an email and let us know.
And we’re happy to show you around the facility.
So the next question, and I guess it’s actually a good question that I did not address: how are models transferred from one piece of equipment to another?
Are they transferred as a file that you transfer by email?
Can you transfer all the spectra collection and analytical testing?
So this is a great question.
I did not address this, and it’s going to vary based on the kind of instrument you’re using, but with our instruments specifically, I will speak about the F 750 and the F 751: all of our models are housed in what we call an app.
The apps come from a software called App Builder that is available for any of our customers to use for free when they purchase a device.
You can also use it without purchasing a device.
If you go to our website, you can just download it.
But that software compiles the entire database, the spectra library and the analytical testing, into a project.
And from that project we can then compile that further into what we call an app.
And that app is a simple file that is just put onto the SD card of the device.
So all of our devices utilize SD cards to house the apps and also as a backup station for all of our data.
And so any data that you collect gets stored here, but also the apps that are utilized by the device which contain those predictive models are a simple file that is located on this SD card.
And the instrument firmware reads that file and is then able to simply make those predictions within seconds on the device after you’ve loaded it.
So when I’m talking about transfer of an app from one instrument to another, it’s as simple as just transferring a file from this SD card, or from my computer, to a new instrument’s SD card. As for other instruments,
I can’t speak for many of them, but in general it’s a very similar kind of situation, where it’s typically a model that is housed in some kind of file that has been placed on the device.
And so when you’re transferring it, basically what we’re doing is using an app that is built off of spectra from one F 750 or F 751, which are contained in the app,
and then we’re basically feeding it an input of data from a new instrument; it doesn’t necessarily have to be a different kind of spectrometer, but it’s a new spectrometer.
And so that’s where we’re feeding new input data into this regression.
Since chemometrics is essentially taking that database and simplifying it down into a regression equation, that regression equation is then taking input spectra from this new spectrometer and running them through.
And if that regression equation doesn’t take into account, or factor in, new spectrometers or variability in spectrometers, you’re going to see variability in the predictions that come out of that equation.
So that was a great question.
Thank you for that.
I’m glad I was able to expand on that.
The next question is are you aware of a model on pomegranate?
So, off the top of my head, I know I’ve spoken to a couple of researchers about pomegranate before, and it really depends on what you want to be measuring with pomegranate.
What I would do is actually encourage you to reach out to me directly, and we can talk a little bit more offline, or over email, about what it is you’re interested in actually measuring, if you are interested in a pomegranate model.
So I’ll have Susie put my email in the chat function; feel free to send me an email and we can discuss that.
And then the next question is if you acquire a new spectrometer that is loaded with a given calibration, how is the user assured that the prediction is accurate?
So let’s use us as a use case: when somebody buys a new F 751 avocado quality meter from us, we ship it with the built-in model, the calibration, that we’ve built.
Now, our calibrations are built with multiple seasons’ worth of data, and they’re meant to be as robust as possible across multiple regions.
And there are as many assurances as the manufacturer can give as far as being transparent: here are our results;
here’s what our testing, our validation, looks like.
The best way for the user to be assured that a prediction is accurate is to perform the initial offset calibration, which is required for any new instrument that you purchase.
You’re going to need to perform that bias correction experiment, because there’s always going to be some kind of small bias.
As I mentioned with our neural networks, there’s always a small bias,
even when we calibrate them in-house.
There’s always going to be some slight variability between the fruit that we have available here in the United States versus the fruit that’s available in Australia or New Zealand or South Africa, or wherever.
So that is what you need to do, and it’s also in our user manual; this is not a recommendation but a really hard requirement of the actual technology.
And as I mentioned in the model maintenance section, you need to have a testing scheme where you’re able to regularly test the performance of the device against an analytical method.
In the calibration procedures that we give to our customers when they purchase a new unit,
we provide an actual visual guide and step-by-step instructions for how to do that, what technology to use, and how to actually perform that testing, because what we want to do is standardize the analytical measurement that is done; all over the world, everybody performs their analytical testing differently for any given commodity.
There’s really not a standardized method for how fruit is tested.
And so what we are trying to do is make sure that our customers who are using our device are all using a standard method, because if they’re not, then they’re going to perceive that their analytical results are different from what the device is predicting.
But that could be the result of your analytical testing being inaccurate, or not being
the same method that we are using to actually build the models.
So, with analytical testing, let’s use avocado, for example: when we build our avocado models, we use the traditional forced-air oven dehydration method to dry out avocados to determine dry matter content.
Now, there are a lot of regions in the world that have decided that takes too long;
they’ve found another method using a microwave that is much easier and faster for them to perform.
However, the downside of that is that it’s a completely different method than the actual dehydration method by forced air, which is the more traditional method.
And so, because there are discrepancies between those methods, you can’t expect the model to perform the exact same as your microwave results, because that’s not what the model was built on.
The model was built on results from dehydration using forced air.
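Since the measurement itself is so central here, it helps to see the arithmetic: dry matter is just the dried weight of the sample over its fresh weight. The sample weights below are hypothetical.

```python
# Dry matter content from a forced-air oven test: the dried weight of the
# sample divided by its fresh weight. The weights here are hypothetical.
fresh_weight_g = 52.40   # avocado flesh sample before drying
dry_weight_g = 14.67     # the same sample after drying to constant weight

dry_matter_pct = 100.0 * dry_weight_g / fresh_weight_g
print(f"Dry matter: {dry_matter_pct:.1f}%")   # -> 28.0%
```

The formula is the same regardless of drying method; the discrepancy between oven and microwave comes from how completely and how evenly each method drives off the water.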
And so there’s a lot of things to consider there.
The basis of what I’m trying to say is that you need to perform testing on your end to be fully assured that the predictions are accurate, but you need to perform those tests using analytical methods that align with the analytical methods used to create the models.
And so in that way, we try to provide as many resources as we can to educate our customers on what the actual analytical methodology should be for performing that testing.
So that’s the long answer.
But hopefully that helped clarify for you what you were asking there about how to be assured of prediction accuracy.
And then the next question is: show us how we obtain data for scientific research.
I’m not sure what this question is asking, but I’m guessing it’s regarding the review article that I was discussing earlier in the slides.
And if you want the link to that, I’m pretty sure Susie has that.
I can pull up the chat here and actually check to see if Susie has posted that yet or not.
Yeah.
So, if you look at the chat, the very first actual link is the one you can click on.
That is where you can find the review article that I’m referencing, and that review article has dozens and dozens of amazing references to specific publications that dive even further into these various calibration transfer techniques.
And then the last question, I just got one more question here, is: does Felix provide software for development of new calibrations?
And the answer is yes, we do.
We have a proprietary software that we call App Builder that is available for download from our website.
You can actually go and download it right now if you want to: just go to the support section of our website and then choose the F 750, and you can find the download link for App Builder there.
And so that is a software that you can use to build all sorts of calibrations.
It incorporates both approaches:
you can use either PLS or artificial neural network chemometrics, and you can also change the type of spectra you’re using.
You can go from raw or just absorbance spectra all the way through second-derivative spectra, and a multitude of other options as well.
So a lot of customization available.
And that is free for anyone who wants to use it.
So absolutely, we do provide that.
So the next question looks like a support question regarding the F 750 and struggles with App Builder.
I recommend to this person: if you are having issues with App Builder functioning on one of your computers versus the other one, please reach out to our support team.
We have a link on our website; just reach out to them really quick.
We’re extremely responsive.
We’ll respond to you the same day, or within 24 hours, and you’ll get a response and hopefully a solution really quickly to the issue you’re having with App Builder functioning on one of your two computers.
And then the last thing is a request for a certificate of participation.
I believe we can provide that.
And so if you do want a certificate of participation, just feel free to email us and we will provide that to you.
All right.
And that wraps up our questions.
So thank you, everyone, again for participating.
I hope this entire series was, you know, very insightful for you.
And I would say that if you have any further questions that I wasn’t able to address, please feel free to reach out to us, our support team or you can reach out to me directly and I’m happy to answer any further questions you might have.
So thank you all again for joining us and we will see you at the next webinar.