How we're using AI to improve trust and efficiency in the mobility sector (slightly nerdy overview)
A post by Ravin's VP Product, Neil Alliston
Back in February I gave a talk at Move 2020 in London which, given it was one of the last of the day, was surprisingly well attended.
The presentation was called ‘Using computer vision to automate vehicle inspections’ and, from the questions, it seemed the audience - even the non-technical folks - were really interested in opening up the bonnet and seeing how our technology actually works.
So, this blog post is an update of that explanation, taking into account everything that’s happened at Ravin between then and now (and a LOT has happened in the last six months!).
The problem we’re solving
In essence, we’re making it easier to create a vehicle condition report and, with our AI analysis, we’re making those condition reports trustworthy - even when done by a non-expert inspector.
This is enabling:
Rental car companies to fairly charge their customers (and avoid charging the wrong customers).
Captives to empower lessees to complete end-of-lease inspections, without needing to rely on third party inspectors.
Dealers to make informed decisions about part-exchange or trade-in deals.
Insurance companies to quickly triage claims from self-reported customer inspections.
It all starts with a scan
A scan is when a camera takes a series of images of the vehicle. This can be done with stationary hardware, such as a set of CCTV cameras, or with a smartphone. In order to create an accurate model of the vehicle, we typically use somewhere between 40 and 200 images from all different angles, capturing a full 360-degree view of the vehicle.
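To make the 360-degree coverage idea concrete, here is a minimal sketch (not Ravin's actual implementation) of how you might check that a set of capture angles covers a full walk around the vehicle, assuming each image carries a compass-style heading:

```python
def covers_full_circle(headings_deg: list[float], sector_size: int = 30) -> bool:
    """Check that image headings hit every `sector_size`-degree
    sector of a full 360-degree walk around the vehicle."""
    sectors_needed = 360 // sector_size
    hit = {int(h % 360) // sector_size for h in headings_deg}
    return len(hit) == sectors_needed

# A scan with an image every 10 degrees covers every sector;
# three images from the front and sides clearly do not.
covers_full_circle([i * 10 for i in range(36)])  # True
covers_full_circle([0, 90, 180])                 # False
```

The sector size is an illustrative parameter: smaller sectors demand denser coverage before a scan is accepted.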
This process is modelled on a human vehicle inspector. The last time you took your car in for a repair, the mechanic will have 'scanned' your car visually, walking around to see parts of the vehicle that might not obviously have been impacted.
Asking a user to take 40 pictures on their phone would be torture. So instead we’ve built a clever little trick into our mobile app that makes it seem to the user that they’re taking a video whereas, in reality, we’re capturing images covering the different parts of the vehicle.
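One simple way to turn a continuous, video-style capture into a manageable set of stills (sketched here in plain Python, not Ravin's actual app code) is to sample frames evenly across the recording:

```python
def select_frames(total_frames: int, target: int = 40) -> list[int]:
    """Pick `target` evenly spaced frame indices from a continuous capture,
    so the user feels like they're filming while only still images are kept."""
    if total_frames <= target:
        return list(range(total_frames))
    step = total_frames / target
    return [round(i * step) for i in range(target)]

# A 20-second capture at 30fps (600 frames) reduces to 40 evenly spaced stills.
indices = select_frames(600, target=40)
```

In practice a real app would also weigh sharpness and coverage when choosing which frames to keep, but even this naive version shows why the user never has to press the shutter 40 times.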
Then we build a model of the vehicle, on the spot
Rather than maintaining a database of 3D models of all vehicle makes and models (which would be completely unscalable), our algorithms take in the images from the scan and instantly build a unique 3D model of that vehicle. To do this we run a powerful algorithmic pipeline on the device in the browser (first pass) and in the cloud (detailed analysis), and the modelling has been refined by running it on well over one million vehicles that we've scanned.
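The two-pass idea (a quick on-device check, then a detailed cloud stage) can be sketched roughly like this. The function names and the acceptance rule are hypothetical stand-ins, not Ravin's real pipeline:

```python
def quick_pass(images: list) -> bool:
    """Coarse on-device/browser check: is the capture usable at all?
    (Hypothetical stand-in: here we just require enough images.)"""
    return len(images) >= 40

def build_model_in_cloud(images: list) -> dict:
    """Placeholder for the detailed cloud reconstruction step."""
    return {"image_count": len(images)}

def analyse(images: list) -> dict:
    """Route a scan: reject unusable captures immediately on the device,
    otherwise hand off to the detailed (cloud) modelling stage."""
    if not quick_pass(images):
        return {"status": "retake", "model": None}
    return {"status": "ok", "model": build_model_in_cloud(images)}
```

The design point is latency and cost: the cheap first pass gives the user instant feedback ("retake") without a round trip, so the expensive reconstruction only ever runs on scans worth processing.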
This means that, we believe, we have the largest database in the world of well modelled vehicles to train our damage detection algorithms on. We’re not just taking in images of damaged vehicle parts. We see the whole vehicle and, particularly from our work in the rental car space, we also have lots of examples of previous scans of the same vehicle before it was damaged.
This gives us the perfect dataset for training our damage detection models.
Finally, based on the 3D model, we’re able to find and categorize issues on the vehicle.
Once we’ve successfully created the 3D model of the vehicle, we run a series of deep neural networks to find, verify and categorize damage.
Our algorithms are trained to validate damage they’re scanning and categorize it: What type of damage is it? Which part is it located on and where on the part? What’s the severity of the damage?
All these answers help our customers to make decisions faster. For example:
Choosing whether to charge a renter, and how much is fair to charge.
Preparing repair work ahead of time or adjusting the part exchange offer.
Triaging claims into quick or more involved repairs.
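To make the categorisation concrete, here is a hedged sketch of a damage record and a toy triage rule of the kind these answers enable. The field names and thresholds are illustrative, not Ravin's schema:

```python
from dataclasses import dataclass

@dataclass
class Damage:
    kind: str         # e.g. "dent", "scratch", "crack"
    part: str         # e.g. "front-left door"
    position: tuple   # (x, y) location on the part, normalised 0-1
    severity: int     # 1 (cosmetic) .. 5 (structural)

def triage(damages: list[Damage]) -> str:
    """Toy triage rule: severe damage needs an involved repair,
    anything else is a quick job, no damage means no action."""
    if not damages:
        return "no-action"
    if any(d.severity >= 4 for d in damages):
        return "involved-repair"
    return "quick-repair"
```

Because each answer (type, part, position, severity) is a structured field rather than free text, downstream decisions like this become simple, auditable rules.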
To do this we use a combination of supervised and unsupervised machine learning. We have professional vehicle inspectors who will QA algorithmic suggestions and will adjust them based on their expert knowledge. All these adjustments feed back into our learning models, making our algorithms smarter and more accurate.
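The human-in-the-loop feedback described above can be sketched as follows. This is an illustrative shape for the loop, with hypothetical dictionary keys, not Ravin's internal system:

```python
def apply_inspector_review(prediction: dict, correction: dict,
                           training_queue: list) -> dict:
    """Human-in-the-loop sketch: an inspector's corrections override the
    model's suggestion, and the corrected example is queued for retraining."""
    final = {**prediction, **correction}  # expert adjustments win
    training_queue.append(final)          # feeds the next training run
    return final

# An inspector bumps the severity of a suggested scratch from 2 to 3;
# the corrected record joins the training data for the next model update.
queue = []
report = apply_inspector_review(
    {"kind": "scratch", "part": "rear bumper", "severity": 2},
    {"severity": 3},
    queue,
)
```

The key property is that every expert adjustment does double duty: it fixes the current report and it becomes a labelled training example.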
Putting it all together
Once we know the condition of the vehicle we put it all together into a condition report, which is available to review in our desktop portal (Ravin Eye), via a programmatically exportable report (API), or as a simple PDF.
What's smart about our reporting system is that, wherever we've seen multiple scans, we understand the condition of the vehicle across its full lifecycle. This means that if the condition changes (e.g. new damage appears, or damage is repaired), our system flags that too.
Clients come to us knowing that traditional inspection flows will change dramatically in the future. Ravin has developed solutions that open up a new realm of possibilities in rental car, remarketing and trade-in. Industry thinkers who treat the current ways of processing inspections as set in stone will fall behind those who see that AI is changing vehicle inspections forever.
A first mover advantage is available for those willing to get to work on it now.