FAQ
Here is a list of frequently asked questions.
Q: What should I be annotating?
A: If you could train a human to recognize it in under an hour, you can use Lodestar Navigator to recognize it automatically. Take a look at our quick start guide for more information on how to label video using Lodestar.
Q: Should I include “this” in the bounding box?
A: The way you place bounding boxes informs the AI how it should place its bounding boxes. As long as you decide what you're going to include and annotate consistently, the AI will be able to make the same kind of bounding boxes. Depending on your downstream application, you may make different choices about what gets included. For example, maybe it's important that the AI can recognize the position of the full object even when it is partially hidden; if so, you should label the full object even when it is partially hidden. If you would rather your AI identify only objects that are fully visible on screen, then you should annotate only the fully visible objects.
Q: The AI is making a lot of predictions on every frame, what’s going on?
A: This is normal at the start of a project, while the AI is still figuring out what it should be detecting. Before you start editing a frame, you can press the 'C' key to delete all machine annotations below a certain confidence percentage; once you start editing the frame, you can no longer change the confidence threshold. You can press 'C' multiple times to see different confidence thresholds.
If this issue does not go away, the category you are trying to recognize may not yet be clear enough for the AI to detect, and it will require more labels before the AI becomes competent with it.
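Conceptually, the 'C' key applies a confidence cutoff to the machine's predictions. The sketch below illustrates that idea only; the prediction structure shown is hypothetical and is not Lodestar's internal format.

```python
# Illustrative sketch of confidence-threshold filtering (not Lodestar's code).
# Each prediction is a hypothetical dict with a "label" and a "confidence".

def filter_predictions(predictions, threshold):
    """Keep only predictions at or above the confidence threshold."""
    return [p for p in predictions if p["confidence"] >= threshold]

predictions = [
    {"label": "strawberry", "confidence": 0.92},
    {"label": "strawberry", "confidence": 0.41},
    {"label": "blueberry", "confidence": 0.12},
]

# Raising the threshold removes low-confidence clutter on a busy frame:
# at 0.5, only the 0.92 prediction survives.
print(filter_predictions(predictions, 0.5))
```

Pressing 'C' repeatedly is like re-running this filter with different thresholds until the frame looks manageable.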
Q: I added a new category but I am seeing predictions that are not taking that category into account, why is that?
A: The AI model is constantly being retrained and learning new things, but the predictions you see on your data can be slightly outdated. For example, if you train the model to recognize only strawberries, this first model will go through your data and make predictions. If you then add a new category for blueberries, a new AI model is trained that can detect both blueberries and strawberries, but it has not yet had time to make predictions. So if you navigate to a new frame right away, you will see predictions made by the older model, which did not know it was supposed to be detecting blueberries. Leave the project in an accelerated state for some time so the newer model can make its predictions.
Q: I corrected a prediction made by the labeling AI but it is still predicting that object incorrectly, why is that?
A: Predictions stay on the data after the labeling AI has made them, even if the labeling AI has been updated since. So you may still see predictions from older models even after you have trained the newest model on that exact object.
Q: What is cross validation accuracy?
A: The percentage you see in the project summary is the Cross Validation Accuracy, a test of how good the labeled dataset is at training models. We train a model on one subset of the data and then use it to detect objects in the rest of the data. 100% would mean that a model trained on part of the dataset makes perfect bounding boxes on every single object and does not miss any objects. Cross-validation calculations start at 100 annotations, including 20 in each category. After the 100th annotation, allow the AI about 1-3 hours to calculate the accuracy; the exact time depends heavily on the system and the amount of data.
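The train-on-one-subset, test-on-the-rest idea above can be sketched in a few lines. This is a toy illustration of k-fold cross validation with a deliberately trivial "model", not Lodestar's actual training pipeline; all names and data here are made up.

```python
# Toy sketch of k-fold cross validation (not Lodestar's implementation).
# Split the labeled examples into k folds, hold one fold out as the test
# set, train on the rest, and average the held-out accuracy.

def k_fold_accuracy(examples, k, train_fn, eval_fn):
    fold_size = len(examples) // k
    scores = []
    for i in range(k):
        test = examples[i * fold_size:(i + 1) * fold_size]
        train = examples[:i * fold_size] + examples[(i + 1) * fold_size:]
        model = train_fn(train)
        scores.append(eval_fn(model, test))
    return sum(scores) / k

def majority_train(train):
    """A trivial 'model': memorize the most common label in the training fold."""
    labels = [label for _, label in train]
    return max(set(labels), key=labels.count)

def accuracy(model, test):
    """Fraction of held-out examples whose label matches the model's guess."""
    return sum(1 for _, label in test if label == model) / len(test)

# 8 "cat" examples followed by 4 "dog" examples.
data = [(i, "cat") for i in range(8)] + [(i, "dog") for i in range(4)]
print(k_fold_accuracy(data, 4, majority_train, accuracy))
```

Note how the score drops below 100% as soon as a fold contains examples ("dog") that the training folds cannot fully account for; this mirrors why adding new, harder examples can lower Cross Validation Accuracy until enough of them are labeled.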
Q: Why is my Cross Validation Accuracy going down?
A: Cross Validation Accuracy is a test of how good the localization model is. We automatically train the model on your labeled data and use new data coming in to test the model. 100% would mean that a model trained on part of the dataset makes perfect bounding boxes on every single object and does not miss any objects. A drop in accuracy is a sign that the problem has become more complicated: more difficult examples have been introduced, but not enough of them to ensure that a model trained on part of the data can detect all of these new examples. Alternatively, the quality of the labeled data may not be very high. For example, identifying objects across two videos taken with two different cameras is a more complicated problem than identifying the same type of object as seen by only one type of camera in only one type of light.
Q: Can I upload (images/maps/X-rays/satellite images) into my project?
A: We provide a Jupyter template for converting images into the right format so that you can upload images into your project. We do not currently support X-ray or map data natively, but if you can turn it into a video, our video annotator can be used to annotate it. For satellite images, we have used sliding windows in the past to turn one large image into a short video. We plan to expand the types of data we support in the future.
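The sliding-window approach mentioned above can be sketched as follows. This is an illustration of the idea only, not the Jupyter template itself; the window size, stride, and image dimensions are made-up examples, and in practice each crop would be written out as one video frame (e.g. with a video-writing library).

```python
# Hypothetical sketch of the sliding-window idea: walk a fixed-size window
# across one large image and emit one crop per "frame", turning a single
# satellite image into a short video-like sequence.

def sliding_windows(width, height, win, stride):
    """Yield (x, y, w, h) crop boxes, left-to-right then top-to-bottom."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield (x, y, win, win)

# Example: a 1024x512 image with a 512-pixel window and 256-pixel stride
# yields three overlapping crops, i.e. a three-frame "video".
frames = list(sliding_windows(1024, 512, 512, 256))
print(frames)
```

Overlap between consecutive windows (stride smaller than the window) helps ensure objects near crop edges appear whole in at least one frame.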
Q: How can I see the annotations that I’ve made?
A: If you click on a project, you will see a menu option on the left labeled Annotation Explorer; open it to see your annotations. You can also access your annotations programmatically with Jupyter notebooks through the data science workbench located in the project dashboard.
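Once annotations are available in a notebook, a common first step is a quick summary pass. The sketch below assumes a generic JSON export; the field names and structure shown are hypothetical, not Lodestar's documented schema, so adapt them to whatever the data science workbench actually provides.

```python
# Hypothetical sketch: count annotations per category from a JSON export.
# The schema below ("video", "frame", "category", "box") is an assumption
# for illustration only, not Lodestar's documented format.
import json

export = """
[
  {"video": "field_01.mp4", "frame": 10, "category": "strawberry", "box": [12, 34, 56, 78]},
  {"video": "field_01.mp4", "frame": 11, "category": "strawberry", "box": [14, 36, 58, 80]},
  {"video": "field_02.mp4", "frame": 3,  "category": "blueberry",  "box": [5, 9, 20, 24]}
]
"""

annotations = json.loads(export)

# Tally how many annotations exist in each category.
per_category = {}
for ann in annotations:
    per_category[ann["category"]] = per_category.get(ann["category"], 0) + 1

print(per_category)
```

A per-category count like this is a quick way to check whether each category has enough examples, e.g. before expecting cross-validation scores to appear.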
Q: How can I see the videos that I’ve uploaded?
A: The file browser automatically creates a folder named "uploaded" once your first video is fully ingested into the project. This folder contains receipts for all the files that have been added to the project. Suggestions drawn from all of your videos will guide you to the highest-value data for training your AI to recognize the categories you're interested in detecting. Once you've followed a suggestion, you can navigate within the original video file. If you want to navigate between files manually, see the shortcuts or the quick start guide.
Q: After I upload my videos they immediately disappear, and I can't get them to stay in the file browser. Why is that?
A: This is normal: when you add a video to the annotator, it is processed and then removed from that folder. A new folder named "uploaded" holds receipts of all previous uploads to this project. Suggestions drawn from all of your videos will guide you to the highest-value data for training your AI to recognize the categories you're interested in detecting. To see the videos in a given project or navigate to a specific video, see the quick start guide for the steps to follow.
Q: Can I use Lodestar Navigator to make something other than bounding boxes? Polygons? Markers? Semantic segmentation?
A: As of today, Lodestar only uses bounding boxes; they are the fastest way to get a good amount of data and get started on an AI project. We plan to add segmentation to the product at a future date. If you have a use case that you believe requires other tools, contact our product team ([email protected]) and we can keep you updated as we release new annotation modalities.