⚡ Quick start guide
In this guide we will go over the key steps needed to create high-quality datasets from video.
Step 1: Set up your account
To create your Lodestar account, you will need to accept an invitation from an existing project owner or admin. This invitation is sent from support@lodestar.ai with the subject "Update Your Account" and contains a link where you can set your password and log in to Lodestar for the first time. If you cannot find the email, check your spam folder.
Step 2: Create your first project
After logging into Lodestar you will land on the project explorer, where all your future projects will live. Since no projects have been created yet, the list is empty.
To create a new project, click the first tile in the project explorer. This opens a pop-up where you can enter your project's name and a short description, then click the create button.
Once you've filled in the name and description, your new project will appear in the list.
Now that you've created your first project, click its tile to open the project's dashboard.
Step 3: Open the project dashboard
The dashboard is where you manage your project. We will cover its components in more depth in a later guide, but here are the basics: each project has its own dashboard, which you can use to upload media into the annotator, manage the categories you want to label, assign a GPU to the project for AI-powered labelling, launch the annotator, and review your annotations in the annotation explorer.
The dashboard is also the home of the data science tools, such as Jupyter and Grafana, which are needed for model training and more advanced dataset management. This is a topic for another guide.
Step 4: Upload your video
From the project dashboard, click the upload media button in the second panel from the top to open the project's file management system.
Here you can upload the videos you want to annotate by dragging and dropping them onto the folder named "Add to annotator". Once the upload is complete, files in that special folder are automatically prepared for labeling; this can take up to 30% of the video's length to complete.
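As a rough illustration of the bound above (the exact time depends on your setup), the worst-case preparation time can be estimated from the video's length. This helper function is hypothetical, written only to make the arithmetic concrete:

```python
def max_prep_seconds(video_seconds: float, factor: float = 0.3) -> float:
    """Upper-bound estimate of preparation time.

    The 30% figure comes from this guide; actual time varies with your server.
    """
    return video_seconds * factor

# A 10-minute (600 s) video may take up to about 3 minutes (180 s) to prepare.
print(max_prep_seconds(600))  # 180.0
```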
Once the file is ready to annotate, its first frame will be displayed in the project dashboard and the statistics will populate on the right-hand side.
Step 5: Accelerate the project
When your media appears in the project dashboard, it's ready to annotate. You can now accelerate the project by clicking the toggle on the dashboard to attach a GPU to the video annotator. This is needed to generate AI predictions and frame suggestions. If a GPU is available in the system, your project will become accelerated within 15 seconds. If not, you will receive a message saying that your request has been queued. You can still launch the annotator, but you will have to wait for a GPU to become available before your AI starts to train and generate predictions and suggestions.
Step 6: Launch the video annotator
Now that your data is ready, you can start annotating. Clicking the golden annotator button will open the project's video annotator.
Once your media is loaded into the video annotator, you can begin labeling and turn your video into high-quality datasets and models.
Step 7: Create a new category
In the video annotator you will find the category menu on the right-hand side, which allows you to create new categories for your annotation project. Just click the + in the top right corner and enter the category name.
Once you've created your categories, you'll see that one of them is highlighted. This shows which category is currently selected: if you start drawing, this is the category that will be applied.
Step 8: Label your videos manually
If you need to delete a bounding box, select it and press the 'D' key. Pressing it once turns the box dark and makes it a background annotation; press 'D' again to remove it completely.
Once every object of interest in the frame has been labeled, click the green button (or use the 'F' shortcut) to confirm the frame before moving on to the next one. You will not be able to annotate further until you confirm to the AI that the full frame has been annotated.
At any time, if you would like to see how many annotations you and your team have made, click the down arrow next to a category to open the category panel. "My annotations" counts the annotations you have made, while "All annotations" counts the annotations made by all users in this project for a specific category.
It takes 20 annotations in total for the automatically trained model to activate and start generating predictions and suggestions. During this manual phase you can use the video controls to navigate through frames, or the 'E' key to move ahead 10 frames at a time.
To switch videos manually, refer to the annotator link format described at the bottom of this guide.
Step 9: Label your videos with AI predictions and suggestions
Once you have annotated 20 examples of any category, the labelling AI will automatically be trained and start generating its own labels, as well as suggesting frames that it would like you to verify.
Predictions made by the AI can be recognized by the colored bar below the label, which represents that prediction's confidence level.
Before you start editing a frame, you can use the 'C' key to filter out model-generated labels by confidence level. This is useful at the start of a project to quickly remove low-quality predictions.
If you want to see the list of keyboard shortcuts, you can always open the shortcut panel on the left-hand side of the video annotator.
AI-powered suggestions
The 7 thumbnails below the video player are suggestions made by the AI: frames chosen because they contain valuable information that will increase the model's performance. This intelligent frame selection is known as active learning. Clicking any of the thumbnails navigates to that frame, and suggestions can come from any video. The X in the top right corner can be used to dismiss a suggestion.
The golden squares indicate which part of the dataset these suggestions come from. In this case there are 10 minutes of data, represented by the lighter squares, and the golden squares all fall in the first half of the video dataset.
The confidence graph in the bottom right corner shows the confidence of the different predictions on screen. You can use the 'C' key to change the threshold below which predictions are not shown.
Step 10: Invite your collaborators
Now that your project is set up and the labeling AI is generating suggestions and predictions, it is time to invite the rest of your team to the project.
In the project dashboard there is a panel titled collaborators that you can use to invite teammates to your project by email. They will be able to access your project's dashboard and create their own projects. Only you, the project creator, can invite people to your project.
Video annotator links can be used to share specific datasets, images, or videos. Pressing the 'P' key while the annotator is selected will copy the link to your clipboard.
Server.lodestar.ai/annotator#datasetId=8&streamId=6&frameNumber=83
Server should be replaced by your server's name.
datasetId can be modified to point to a custom dataset.
streamId can be modified to look at another video.
frameNumber can be modified to access a specific frame.
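If you generate many links, for instance to send teammates directly to specific frames, the parameters above can be combined programmatically. A minimal sketch in Python; the helper name is hypothetical, and the URL format simply follows the example shown above:

```python
def annotator_link(server: str, dataset_id: int, stream_id: int,
                   frame_number: int) -> str:
    """Build a shareable video annotator link (hypothetical helper).

    Follows the fragment format from this guide:
    <server>/annotator#datasetId=..&streamId=..&frameNumber=..
    """
    return (f"{server}/annotator"
            f"#datasetId={dataset_id}&streamId={stream_id}"
            f"&frameNumber={frame_number}")

# Reproduces the example link from this guide:
print(annotator_link("Server.lodestar.ai", 8, 6, 83))
# Server.lodestar.ai/annotator#datasetId=8&streamId=6&frameNumber=83
```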
Step 11: Review your annotations
Now that you and your teammates are working together on this project, you can use the annotation explorer menu item, on the left in the project dashboard, to review your existing annotations and manage your team's work.
The annotation explorer is the interface for managing your annotations, reviewing them, collaborating with your team members and creating training materials.
At the top of the annotation explorer there is a set of filters you can use to narrow down which annotations are displayed, by category or by annotation property.
You can use the textbox, the flagging button, and the exemplar button to aid your review process. All three of these mechanisms are also available to annotators in the details panel on the left, so they can flag and leave notes on any ambiguous edge cases not covered in the training materials.
Using the flagged filter in the annotation explorer, you can review all the edge cases the annotation team has signalled as ambiguous and address the specific issues. You can turn a positive edge case into an exemplar by clicking the make exemplar button, which adds it to the training materials.
Exemplars are displayed on the project dashboard in the category panel as examples of good ways to annotate specific edge cases. This helps new annotators quickly get up to speed on annotating complex cases consistently, using data from the project itself. It also optimizes the reviewer's time: they can evolve the labelling guide by simply reviewing the flagged cases, ensuring high data quality throughout the project.
To create annotations, select your category in the right-hand menu and drag directly on the video. You can click a bounding box to select it, then use the blue corners to modify the box.