Last week we wrote an algorithm spotlight about a video processing algorithm called Video Metadata Extraction. This week, we are pairing it up with a really cool microservice from an Algorithmia user called Car Make and Model Recognition that detects a car’s make and model from an image. This algorithm is currently in beta, but we’ve tested it out on a bunch of images and it works really well. We thought that it might be fun to take a traffic video and find the unique brands of cars that pass by a bus stop.

Before we released the Video Metadata Extraction algorithm, you would have had to use the Split Video By Frames algorithm and then pass each image frame's path to the Car Make and Model algorithm.

Now, using the Video Metadata Extraction algorithm you can do the same thing in just a few lines of code.

## Step 1: Install the Algorithmia Client

This tutorial is in Python, but it could be built using any of the supported clients, such as Rust, Scala, Ruby, Java, or Node. Check out the Python client guide for more information on using the Algorithmia API.

Install the Algorithmia client from PyPI:

[code python]
pip install algorithmia
[/code]

You’ll also need a free Algorithmia account, which includes 5,000 free credits every month.

After you install the Algorithmia library, pass the API key from your account into the client:

[code python]

import Algorithmia
client = Algorithmia.client("YOUR_API_KEY")

[/code]

## Step 2: What’s the Make and Model?

In one function we can pass in the video that we want to retrieve metadata from, along with the algorithm that will detect the cars in our bus stop video.

[code python]

def get_make_model():
    video_file = "data://your_user_name/your_data_collection/BusStation-6094.mp4"
    # Create the Video Metadata Extraction algorithm object
    algo = client.algo("media/VideoMetadataExtraction/0.4.2")
    input = {
        "input_file": video_file,
        "output_file": "data://.algo/temp/bus_video_car_detection.json",
        "algorithm": "LgoBE/CarMakeandModelRecognition/0.3.4",
        "advanced_input": {
            "image": "$SINGLE_INPUT"
        }
    }
    algo.pipe(input).result

[/code]

In the above code, notice that we are passing in a video file stored in data collections, which are data files hosted on Algorithmia for free. Even though this example uses data collections, the Video Metadata Extraction algorithm accepts other types of input (always read the documentation on an algorithm's description page for detailed information about its inputs and outputs).

Next, in the "algorithm" field of the input, we pass in the algorithm that we want the Video Metadata Extraction algorithm to run on each frame.

We then pipe the input into the algorithm, which writes a JSON file to the "output_file" path that you specified.
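Based on the fields this recipe reads later, the output JSON has a top-level `frame_data` list, where each entry carries a `data` list of detections for one frame, ordered from highest to lowest confidence. Here is a minimal sketch of reading that structure, using a made-up two-frame sample in that assumed shape:

[code python]
import json

# Hypothetical two-frame sample in the shape this recipe assumes
raw = """
{
  "frame_data": [
    {"data": [{"make": "Volvo", "model": "V60", "confidence": "0.34"}]},
    {"data": [{"make": "Honda", "model": "Civic", "confidence": "0.41"}]}
  ]
}
"""

data = json.loads(raw)
# Take the top (highest-confidence) detection from each frame
top_detections = [frame["data"][0] for frame in data["frame_data"]]
for det in top_detections:
    print(det["make"], det["model"], det["confidence"])
[/code]

The real file may carry additional fields per detection; the algorithm's description page documents the exact schema.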

## Step 3: Find the Unique Models in Video

Now that we have the JSON file, we can discover information about the whole video, like the unique car types that appear. Note that the Car Make and Model Recognition algorithm also returns the body style, the model, the model year, and more, so be sure to check out the algorithm page for details.

[code python]

def get_json_file():
    video_file = "data://.algo/media/VideoMetadataExtraction/temp/bus_video_car_detection.json"
    if client.file(video_file).exists() is True:
        # Get JSON file from data collections
        data = client.file(video_file).getJson()
        # Get only the highest-confidence detection per frame
        item_list = [record["data"][0] for record in data["frame_data"]]
        # Return only unique records by make of car
        unique_items = [{v["make"]: v for v in item_list}.values()]
        print(unique_items)

[/code]

The above code checks whether the file exists in data collections, uses the `.getJson` method to grab the data, and then, in the "item_list" list comprehension, takes only the highest-confidence make and model that the algorithm detected in each frame.

In the "unique_items" dictionary comprehension we keep only one record per car make; you can change the target key to any of the other keys that exist in the record, such as "model" or "body_style".
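The dict-comprehension trick works for any field: later values overwrite earlier ones for the same key, so exactly one record per make (or body style, or model) survives. A quick illustration with hypothetical records shaped like this recipe's output:

[code python]
# Hypothetical detections; field names follow the sample record in this recipe
item_list = [
    {"make": "Volvo", "model": "V60", "body_style": "Wagon"},
    {"make": "Volvo", "model": "XC90", "body_style": "SUV"},
    {"make": "Honda", "model": "Civic", "body_style": "Sedan"},
]

# One record per make: the later Volvo entry overwrites the earlier one
unique_by_make = list({v["make"]: v for v in item_list}.values())

# Swap the key to dedupe on another field, e.g. body_style
unique_by_body = list({v["body_style"]: v for v in item_list}.values())

print(len(unique_by_make))  # 2
print(len(unique_by_body))  # 3
[/code]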

Here is a sample record of the list of unique car types:

[code python]

{'make': 'Volvo', 'body_style': 'Wagon', 'confidence': '0.34', 'model': 'V60', 'model_year': '2011'}

[/code]
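Notice that the confidence comes back as a string in the record above, so you would cast it to a float before filtering on it. Here is a small sketch using made-up records in the same shape:

[code python]
# Hypothetical records shaped like the sample record above
records = [
    {"make": "Volvo", "model": "V60", "confidence": "0.34"},
    {"make": "Honda", "model": "Civic", "confidence": "0.72"},
    {"make": "Ford", "model": "Focus", "confidence": "0.18"},
]

# Keep only detections the model is reasonably sure about
threshold = 0.3
confident = [r for r in records if float(r["confidence"]) >= threshold]

for r in confident:
    print(r["make"], r["model"])
[/code]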

This example of combining the Video Metadata Extraction algorithm and the user contributed Car Make and Model Recognition algorithm can be used for targeted marketing of billboards, bus advertising space and more.

Next, go ahead and play around with different videos that are licensed as Creative Commons, or even try your own. Perhaps you have some security camera footage and you're curious about the different types of cars that move through a parking lot during the day so you can gauge pricing, or maybe you want to discover information about locations found in video by using the Places 365 Classifier. Have fun trying out the different combinations available, and happy coding!


For your convenience, here is the whole script, which you can also find on GitHub:

[code python]

import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

def get_make_model():
    video_file = "data://your_user_name/your_data_collection/BusStation-6094.mp4"
    algo = client.algo("media/VideoMetadataExtraction/0.4.2")
    input = {
        "input_file": video_file,
        "output_file": "data://.algo/temp/bus_video_car_detection.json",
        "algorithm": "LgoBE/CarMakeandModelRecognition/0.3.4",
        "advanced_input": {
            "image": "$SINGLE_INPUT"
        }
    }
    algo.pipe(input).result

def get_json_file():
    video_file = "data://.algo/media/VideoMetadataExtraction/temp/bus_video_car_detection.json"
    if client.file(video_file).exists() is True:
        # Get JSON file from data collections
        data = client.file(video_file).getJson()
        # Get only the highest-confidence detection per frame
        item_list = [record["data"][0] for record in data["frame_data"]]
        # Return only unique records by make of car
        unique_items = [{v["make"]: v for v in item_list}.values()]
        print(unique_items)

get_make_model()
get_json_file()

[/code]