4.1 Create Annotation Report Task
Adding labels to digital photographs is known as image annotation. Creating an annotation set is the first step toward improving the quality of the results.
Images in a dataset must be labeled so that the training model learns which components of an image matter (the classes) and can then recognize those classes in new, previously unseen images.
To create an Annotation Report Task, first create a task and select "Annotation Report" as the task type from the options provided. In this section, the annotation task refers to a manual annotation task.
Enter the name of the task in the window provided. The new task is stored in the selected project; to save it in a new project instead, click on the "New Project" button and create one. Then click on the "Create Task" button in the stepper on the left-hand side.
Next, upload an imageset containing the images to analyze. Select an imageset from the existing list, or add a new one by clicking on the "New Imageset" button as shown in the image.
Note: Select images in .jpg, .jpeg, or .png format only when performing an Annotation Report Task.
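Before uploading, it can help to check locally that every file uses one of these extensions. The sketch below is illustrative only and is not part of PicStork; the folder name is a placeholder.

```python
import pathlib

# Illustrative sketch, not part of PicStork: list the files in a folder that
# use an extension accepted for an Annotation Report Task.
SUPPORTED_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def list_supported_images(folder: str) -> list[pathlib.Path]:
    return [
        path
        for path in pathlib.Path(folder).iterdir()
        if path.suffix.lower() in SUPPORTED_EXTENSIONS
    ]

if __name__ == "__main__":
    for image in list_supported_images("./survey_images"):   # placeholder folder
        print(image.name)
```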
The user can select an existing imageset or create a new one for the Annotation Report Task. To create a new imageset, click on the "Imageset +" button in the top right-hand corner as shown.
Click on the "Setup" button from the stepper to proceed with these images.
Now, add class labels for the objects to be detected in the images and click on the "Setup" button as shown in the image below.
Click on the "Start" button to start the process. The status of the task will be "Creating" as shown in the image below which will be changed to "Completed" after the completion of the task.
The report of the completed task is now available in this step. The task details are also displayed on the screen as shown.
The task details include the name and creation date of the task, the imageset and the number of images used for the task, the status of the task, and the class labels detected.
The Detailed Report includes the images analyzed, the objects detected, and the total number of images along with their severity levels.
The user can annotate these images manually to capture all the objects, as shown in the image below.
After manual annotation, the detailed report is updated accordingly, as shown in the image.
The manually added objects are included in the generated report. The advantage of manual annotation is that any remaining or undetected objects can be annotated.
4.1.5 Import and Export Multiple Format Files while Annotating Images
The user can import and export different file formats while annotating images.
PicStork supports the following file formats for import; a minimal sketch of such an annotation file appears after this list:
VGG: Karen Simonyan and Andrew Zisserman of the Visual Geometry Group (VGG), Oxford University, introduced VGG models, a type of CNN architecture that produced outstanding results in the ImageNet Challenge. Annotations in this format are produced as CSV or JSON.
XML: An XML file uses the Extensible Markup Language to arrange data for storage and transmission.
CSV: A CSV (comma-separated values) file is a text file with a specific format that allows data to be saved in a table-structured form.
TXT: Text annotation is a machine learning technique that identifies the features of sentences by labelling the various content parts of a text document. Human language can be hard to interpret even for people, however intelligent the technology becomes.
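As a rough illustration of an importable annotation file, the sketch below builds a VGG (VIA)-style JSON record for one image. The field names follow the common VIA 2.x layout, and the file name, size, coordinates, and the "crack" class are placeholders; the exact schema PicStork expects on import may differ.

```python
import json

# Minimal sketch of a VGG (VIA)-style JSON annotation for a single image.
# Field names follow the common VIA 2.x layout; the exact schema PicStork
# expects on import may differ, so treat this as an assumption.
annotation = {
    "blade_001.jpg123456": {                     # key is typically filename + file size
        "filename": "blade_001.jpg",
        "size": 123456,
        "regions": [
            {
                "shape_attributes": {
                    "name": "rect",              # rectangular bounding box
                    "x": 150, "y": 80,
                    "width": 220, "height": 140,
                },
                "region_attributes": {"class": "crack"},   # class label for the region
            }
        ],
        "file_attributes": {},
    }
}

with open("via_annotations.json", "w") as handle:
    json.dump(annotation, handle, indent=2)
```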
PicStork supports the following file formats for exporting files, as shown in the following image; a sketch of reading an exported file appears after this list.
VGG: Karen Simonyan and Andrew Zisserman of the Visual Geometry Group (VGG), Oxford University, introduced VGG models, a type of CNN architecture that produced outstanding results in the ImageNet Challenge. Annotations in this format are produced as CSV or JSON.
Labelme: LabelMe is an actively developed open-source graphical image annotation tool, inspired by the MIT CSAIL application of the same name published in 2012. Along with polygon, circle, line, and point annotations, it can annotate images for object detection, segmentation, and classification.
XML: An XML file uses the Extensible Markup Language to arrange data for storage and transmission.
CSV: A CSV (comma-separated values) file is a text file with a specific format that allows data to be saved in a table-structured form.
TXT: Text annotation is a machine learning technique that identifies the features of sentences by labelling the various content parts of a text document. Human language can be hard to interpret even for people, however intelligent the technology becomes.
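As a rough illustration of consuming an exported file, the sketch below parses an XML annotation assuming a Pascal VOC-style layout. The tag names and the example file name are assumptions; adjust them to match a file actually exported from PicStork.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: read bounding boxes from an exported XML annotation,
# assuming a Pascal VOC-style layout (<object><name>, <bndbox> with
# xmin/ymin/xmax/ymax). PicStork's actual XML schema may differ.
def read_boxes(xml_path: str) -> list[dict]:
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        bndbox = obj.find("bndbox")
        boxes.append({
            "label": obj.findtext("name"),
            "xmin": int(float(bndbox.findtext("xmin"))),
            "ymin": int(float(bndbox.findtext("ymin"))),
            "xmax": int(float(bndbox.findtext("xmax"))),
            "ymax": int(float(bndbox.findtext("ymax"))),
        })
    return boxes

if __name__ == "__main__":
    for box in read_boxes("blade_001.xml"):      # placeholder file name
        print(box)
```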
4.1.6 Color Picker
This tool provides the option to change the color of annotations on the image. The user can choose a color for each class, according to preference or the type of object.
Click on the class as shown in the above image, then click on the colored dot to change the color. The color palette opens to select the desired color. The selected color is applied to all objects annotated with that particular class.
This tool helps the user to identify all the objects of a selected class easily and quickly.
The same color picker is also useful when annotating a training task.
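For intuition only, the sketch below shows one way to derive a stable, distinct color from a class name, which mirrors the idea behind keeping every annotation of a class in the same color. It is not part of PicStork, and the class names are placeholders.

```python
import colorsys
import hashlib

# Illustrative sketch, not part of PicStork: derive a repeatable, distinct
# color for each class name so that every annotation of the same class is
# drawn in the same hue.
def class_color(class_name: str) -> str:
    digest = hashlib.md5(class_name.encode("utf-8")).hexdigest()
    hue = int(digest[:4], 16) / 0xFFFF            # map the hash to a hue in [0, 1)
    r, g, b = colorsys.hsv_to_rgb(hue, 0.85, 0.95)
    return "#{:02X}{:02X}{:02X}".format(int(r * 255), int(g * 255), int(b * 255))

if __name__ == "__main__":
    for label in ("crack", "rust", "erosion"):    # placeholder class names
        print(label, class_color(label))
```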
Import the zip file for the windmill application using the "Import Zip" option to upload the images. Select the application as windmill inspection.
Upload the zip folder containing the files and create a task in the usual way. Complete the process to create an annotation task.
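As an illustration of assembling the archive to import, the sketch below zips a folder of inspection photos before using the "Import Zip" option. The folder and archive names are placeholders, and this preparation step is not required by PicStork itself.

```python
import pathlib
import zipfile

# Illustrative sketch only: bundle a folder of inspection photos into a zip
# archive for PicStork's "Import Zip" option. Folder and archive names are
# placeholders.
def zip_images(folder: str, archive: str) -> None:
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as bundle:
        for path in sorted(pathlib.Path(folder).glob("*")):
            if path.suffix.lower() in {".jpg", ".jpeg", ".png"}:
                bundle.write(path, arcname=path.name)

if __name__ == "__main__":
    zip_images("./windmill_photos", "windmill_photos.zip")
```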
Once the annotation task is completed, open the detailed report to annotate the images manually.
Add the classes for the objects to be detected in the images, as shown in the following image.
Complete the manual annotation of all the uploaded images and open the detailed report to review the detected objects.