>They only work due to a huge network of global effort.
And horses don’t need roads like cars do and cars only work thanks to a huge network of global effort. What point are you trying to make? That we abandon planes until we can develop flight as efficient as nature? Abandoning LIDAR until we can develop visual light perception and processing equal to the human eye and brain?
My back of the napkin estimate is that a human using time of flight ranging would be unable to distinguish between an object directly in front of their face and 8.6 meters away[1]. I think human echolocation uses a different mechanism (presumably relating to amplitude)?
Skimming the Wikipedia article[2], it seems like animals do use time of flight, but also Doppler shifting.
(As a side note, some animals have apparently evolved active countermeasures to echolocation!? It seems obvious in retrospect but incredibly cool.)
There's interesting research into the mechanisms of human echolocation [3], but it was over my head. My impression was that the jury is out as far as the precise mechanisms involved but that there's a lot of evidence to be considered, I'm sure someone with a better background would get more out of it than I did.
(I'm just curious about the mechanism, I agree that LIDAR has natural analogs.)
[1] Speed of sound * 25ms, 25ms being the rule of thumb I've memorized for the minimum interval for two sounds to register as distinct from each other. This is just folk wisdom I've picked up hacking on audio, so perhaps I'm mistaken.
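Putting that napkin math in runnable form (the 25 ms threshold is the folk rule of thumb above, not a measured constant):

```python
# Back-of-envelope for the estimate above. If two sounds must be ~25 ms
# apart to register as distinct, time-of-flight echolocation can't
# resolve anything whose echo returns sooner than that.
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
MIN_INTERVAL = 0.025     # 25 ms, the folk rule of thumb from above

path = SPEED_OF_SOUND * MIN_INTERVAL   # ~8.6 m of total sound travel
print(f"sound travels {path:.1f} m in {MIN_INTERVAL * 1000:.0f} ms")
# Note the echo makes a round trip, so the object itself would be at
# about half that distance, ~4.3 m.
```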
> Messing with machines have a way lower ethical threshold for most people
We’re in a regime that ranges from throwing rocks onto a freeway from a bridge to cutting brake lines as a prank. The attack surface here isn’t new and isn’t difficult to think through. Someone who can’t is going to be equally dangerous, machine or man.
I worked as a research engineer at a university, playing with a 16-beam Velodyne back when those were fancy.
We put it on a car for a demo day, drawing the dots in 3D and marking obstacles red, and during sunset there were artifacts with no obvious way to filter them out.
Strangely, I was never able to recreate this. I think it was some specific atmospheric condition.
One of the cool things about the Waymo Driver is that it can be configured to work with different degrees of quality depending on the sensors available. In a low-risk environment closed to humans, like operating forklifts in an autonomous warehouse, it would work fine with just cameras. Waymo hasn't been very boastful to date, but some of the capabilities are hinted at in this interview: https://www.youtube.com/watch?v=d6RndtrwJKE
You can, they are just expensive (other than the iPhone). Maybe $8k for a basic handheld one (e.g. Trion P1), $15k for a drone attachment (e.g. DJI Zenmuse L1), and more for the proper ones surveyors use, including the tripod-mounted units.
At the consumer end, photogrammetry tends to be so much cheaper that it's preferred unless you really need defined accuracy at a high level of detail. Lidar currently works much better in an industrial/professional context because it's more accurate. Whether lidar will make the jump to lower cost / consumer level is the big open question (and basically the same issue as for cars here).
Look up the RPLidar family of devices. Cheap 1D units, easy to work with. By 1D I mean that each one measures ranges through 360 degrees in the plane it spins in.
I was wondering this too, although for a different use-case. A couple years ago I was walking through a field/vacant lot not far from Centralia, WA and I came across what I think is a grave.
The (supposed) "grave" was roughly human-sized and human-shaped, the ground was concave, sunken in and deepest at the center, and it was encircled with stones that were slightly larger than grapefruit.
The reason I suspect it's a grave is because I stumbled upon a very similar-looking thing at a historical site in Tooele County, Utah, called Mercur Cemetery.
With Lidar I could prove/disprove my grave theory, correct?
I think this depends on your budget and what exactly you want to do. Do you want to scan your house from outside? Sounds expensive, would probably have to be drone-mounted, and the drone would fly around for a while (depending on the shape of the house.) Inside, and don't mind some minor inaccuracies? Not Lidar, but a Kinect from yesteryear may be enough.
That's a reasonable basic overview.
I'm surprised that rotating scanners are still used. It's been twenty years since Velodyne built their first one. They work OK, but cost too much. I was expecting flash LIDAR or MEMS mirrors to take over. Continental, the auto parts company, bought the leading flash LIDAR company over a decade ago, but the volume market a big parts company needs never appeared.
Waymo is still using rotating LIDARs even for the little ones at the vehicle corners. Those need less range. There needs to be a cheap, flush-mounted replacement for those things. The location is too vulnerable. Maybe millimeter phased array radar mounted behind Fiberglas body panels. Waymo needs to solve that problem before they do New York.
The LIDAR on top may not be a problem. Insisting that it has to go away to "look like a car" is like insisting that cars had to have the form factor of horse-propelled buggies. Early cars looked like buggies, but that didn't last.
One big advantage of pulsed LIDAR over continuous is that the interference problem between identical units is much less. The duty cycle is tiny. Data from one pulse round trip is collected in less than a microsecond. Just put some randomization in the pulse timing and getting multiple conflicts in a row goes away.
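A toy Monte Carlo sketch of that argument (a sketch only: the period, window, and jitter values below are assumptions, not any vendor's actual timing scheme):

```python
import random

# Two identical pulsed lidars: the victim listens for ~1 us after each
# of its own pulses; the aggressor fires on the same nominal period.
# Each unit adds random jitter, so their relative phase random-walks
# instead of staying locked. All numbers are illustrative; real duty
# cycles are far smaller than this toy's.
PERIOD = 10e-6    # assumed 10 us between pulses
WINDOW = 1e-6     # assumed listening window where a stray pulse interferes
JITTER = 2e-6     # assumed per-pulse random delay added by each unit
PULSES = 200_000

phase = random.uniform(0, PERIOD)   # aggressor's offset relative to victim
hits = doubles = 0
prev = False
for _ in range(PULSES):
    hit = phase < WINDOW
    hits += hit
    doubles += hit and prev
    prev = hit
    # Both units jitter independently; only the difference in their
    # delays moves the relative phase.
    phase = (phase + random.uniform(0, JITTER) - random.uniform(0, JITTER)) % PERIOD

print(f"collisions: {hits / PULSES:.2%}, back-to-back: {doubles / PULSES:.2%}")
# With JITTER = 0, an unlucky initial phase collides on *every* pulse;
# with jitter, a collision decorrelates within a few pulses.
```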
Waymo have in-house radar, I think in the 70GHz gap in the absorption spectrum. They're pretty obvious as paperback-book-sized flat panels, mounted near other sensors IIRC.
The old Velodyne units were actually susceptible to damage if you left two units running right next to each other. I did hear a proposal at some point for a different but similar unit to use GPS time to sync the rotations of all the units we had live so they wouldn't be pointed at each other, but in practice it seemed to not be a huge issue.
BTW I once gave you guff about continuing to bring up Conti's flash LIDAR, and in retrospect I wish I hadn't, I really enjoy your contributions here.
The SNR for flash Lidar is really low because you spread the beam out over such a large area.
Most automotive Lidar already operate in a “photon starved regime”, ~200-300 photons per return[0]. If you spread that over the entire scene, your SNR drops quickly.
This forces you into 1550nm, and a large detector array and high power laser at 1550nm is extremely expensive.
As for MEMS, it’s been a while, but I think FOV/steering angle range, steering speed, and even maximum beam power were concerns.
EDIT: my Lidar friend Jake reminded me that the aperture size is also an issue with MEMS - smaller aperture = less light collected = lower SNR
[0] https://www.hamamatsu.com/content/dam/hamamatsu-photonics/si...
> Most automotive Lidar already operate in a “photon starved regime”, ~200-300 photons per return[0]. If you spread that over the entire scene, your snr drops quickly.
Translating: Normally you have a large single sensor per laser, which makes measurements at a very high rate. With flash lidar, you split the sensor up like an image sensor. In a normal image sensor, each pixel can collect light for a long time, but if you do that with lidar you have no distance resolution. The sensor is sitting idle 99% of the time, and you pay in sensitivity and accuracy.
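Rough photon-budget math for that point (the per-return figure is from the Hamamatsu reference above; the array size is a hypothetical example):

```python
import math

# A scanned lidar concentrates each pulse on one spot and gets back on
# the order of 200-300 photons per return. A flash unit with the same
# pulse energy splits that light across every pixel of the scene at once.
photons_per_return = 250      # mid-range of the cited figure
flash_pixels = 128 * 64       # assumed flash detector array size

per_pixel = photons_per_return / flash_pixels
print(f"~{per_pixel:.3f} photons per pixel per pulse")          # ~0.031

# Shot-noise-limited SNR scales with sqrt(N):
print(f"SNR ~ {math.sqrt(photons_per_return):.0f} (scanned) vs "
      f"~{math.sqrt(per_pixel):.2f} (flash), per pulse")
```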
Array sensors, MEMs, and phased arrays all struggle because they're all really good at small-angle differences, while the reason for scanning lidar is large-angle differences. Maybe one day we'll start making curved dies and it'll be easier to have a really wide FOV without needing multiple sensors.
You can actually make curved dies already- there’s a company doing that for image sensors, if you thin silicon down it becomes flexible
> I'm surprised that rotating scanners are still used. It's been twenty years since Velodyne built their first one.
They're even older than that. SICK have been pointing laser range finders into spinning mirrors since about 1995 - albeit mostly for industrial safety systems which can be quite price-insensitive.
There are a few things to know about LIDAR to understand why spinning lasers make sense.
First of all, anything emitting a cone of light encounters "inverse square dropoff" - where moving twice as far away means you get a quarter of the light, per unit area. This is most visible with flash photography at night - but it also applies to LIDARs. And in an automotive application, ideally you want to be able to sense things 100m away. Illuminating a laser spot is much more practical than illuminating everything (rough numbers in the sketch after the last point below).
Secondly, whatever light source you use has to be eye-safe. And sure, IR has safety advantages over visible light here - but a light source bright enough to illuminate things at a 100m distance would be very hard to make safe, even with the advantages of IR. As a scanning laser never lingers in one point for long, it can safely be much more intense.
The third thing to know is whatever light source you're using, you're in competition with the sun. Sometimes the sun is low in the sky and directly dazzling your sensors. Other times it's illuminating the same things you want to illuminate. This means you can't make up for a weak light source and inverse-square dropoff with clever signal processing.
And finally, the makers of these cars envisage a future where every single vehicle on the road is using this technology. So there's also a risk of the reflected returns of two different vehicles interfering with one another. Even rotating LIDAR can be vulnerable to it, but flash LIDAR is particularly vulnerable.
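Rough numbers for the inverse-square point above, from an idealized lidar-equation argument (diffuse target, beam divergence and atmospheric loss ignored - a sketch, not a radiometric model):

```python
# Flood illumination spreads as 1/r^2 going out, and the diffuse echo
# spreads as 1/r^2 again coming back, so a flash unit pays roughly
# 1/r^4. A tightly collimated beam keeps its spot size going out and
# only pays the return-trip 1/r^2.
R0 = 25.0   # reference range in meters

for r in (25, 50, 100):
    flash = (R0 / r) ** 4    # flood/flash illumination, round trip
    beam = (R0 / r) ** 2     # collimated scanned beam, return leg only
    print(f"{r:>3} m: flash {flash:.4f}x, scanned beam {beam:.4f}x of the 25 m signal")
```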
Meanwhile, automotive companies aren't scared of moving parts. A car has loads of spinning parts already; they have mastered the art of making spinning things that can keep spinning for thousands of hours.
> Meanwhile, automotive companies aren't scared of moving parts. A car has loads of spinning parts already; they have mastered the art of making spinning things that can keep spinning for thousands of hours.
Almost an understatement.
A typical car wheel hub with a 20-27 inch tire diameter has experienced around 75-100M full rotations by the time it reaches 100K miles.
Meanwhile, the engine probably has revolved ~5-10 times more during the same time.
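That arithmetic checks out:

```python
import math

# Wheel-hub revolutions over 100K miles for 20" and 27" tire diameters.
MILES = 100_000
INCHES_PER_MILE = 63_360

for dia in (20, 27):
    revs = MILES * INCHES_PER_MILE / (math.pi * dia)
    print(f'{dia}" tire: ~{revs / 1e6:.0f}M revolutions')
# -> ~101M (20") down to ~75M (27"), matching the 75-100M claim. An
# engine averaging a couple thousand rpm over those miles racks up
# several times as many revolutions again.
```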
I may be wrong, but I think the concern was delicate components protruding from the body are susceptible to damage in urban areas.
Cars already have mirrors and lights etc. LIDAR is in the same family as those.
Mirrors and lights are generally cheaper than a whole LIDAR unit when I get lightly sideswiped.
I figure if the LIDAR is mass produced on every car, it won't be much more than a headlight. (Headlight prices notwithstanding)
Continental is folding their automotive LiDAR division and is laying off everybody.
Not surprised. I don't think they sold many.
> They work OK, but cost too much.
Costs have dropped dramatically in the past 20 years and continue to do so.
> There needs to be a cheap, flush-mounted replacement for those things.
Why? Corners are the optimal mounting position for maximum visibility. It allows the car to, in effect, see around corners in ways no centrally mounted sensor can.
> Waymo needs to solve that problem before they do New York.
What? Because of vandalism?
Have you ever seen the corners of a car that has been parked in a big East-coast city? They will sustain damage during the course of normal operation and storage, and many people will not stop and leave their insurance information, especially if the damage is perceived as minor and happens while the car is parked and the owner not present. Currently, the corners of a car are relatively non-critical to its function and usually not too expensive to repair. If both of those change, we'll see more expensive damage that is more challenging to repair as well as less likely to be handled by the responsible party.
Also, having the sensors stick out from the corners makes the car's collision box and turning radius bigger. That doesn't help in any tight situation, but I imagine that's not that different between e.g. SF and New York. What is different is the sheer volume of cars and pedestrian activity.
Right. It seems to have been Waymo's decision to have zero blind spots around the vehicle perimeter, even if that means having the sensors stick out.
Cruise had an accident where another vehicle knocked a pedestrian into a Cruise car, and the pedestrian was dragged. Cruise lost their California DMV autonomous license for that. So there's a good case for full perimeter coverage.
Humans don't have that. The same week as the Cruise incident, a NYPD tow truck dragged a pedestrian some distance because they were in a blind spot for the driver.
They lost their license for not reporting it properly (as required under the license). Not for the accident.
Did the tow truck driver lose their license?
They don't stick out that much. The Geely vehicle has front sensors recessed just above the front wheel well, without much additional side clearance. Either way, a collision involves regulatory filings, downtime, and sensor recalibration even if no damage is sustained.
Waymos sometimes stop briefly in parking spots while waiting for assignments, but they don't really park as such except in special lots. The big problem I have seen is that they don't always pull to the curb when releasing passengers, and if a door is left even slightly ajar, they will sit there requesting that the door be closed, even while blocking a lane with many beeping cars behind them.
Not having a motor and thus having to depend on people to close doors on an autonomous car seems very silly.
Waymo's custom designed 6th generation vehicles[1] with self-closing doors were expected to enter service this year, but have [probably] been put on indefinite hold due to tariff issues.
[1] https://waymo.com/blog/2021/12/expanding-our-waymo-one-fleet...
Can't they retrofit a door closer to their current cars?
I think it’s due to how often cars bump or scratch against each other in NYC (I.e. the sensors are in a vulnerable spot to be easily damaged).
It’s quite funny seeing the number of cars that have bumper skirts in NYC to help minimize damage from inevitable close encounters with other vehicles
Here's an interesting "lidar gem" from Hacker News a few years ago:
https://news.ycombinator.com/item?id=33554679
Lidar obstacle detection algorithm from a Git repo leaked onto Tor
This is a drivable region mapping (obstacle detection) algorithm found in what appears to be a git repo leaked from an autonomous vehicle company in 2017. The repo was available through one or more Tor hidden services for several years.
The lidar code appears to be written for the Velodyne HDL-32E. It operates in a series of stages, each stage refining the output of the previous stage. This algorithm is in the second stage. It is the primary obstacle detection method, with the other methods making only small improvements.
The leaked code uses a column-major matrix of points and it explicitly handles NaNs (the no-return points). We've rewritten it to use a much more cache-efficient row-major matrix layout and a conditional that will ignore the NaN points without explicit testing.
This is an amazingly effective method of obstacle detection, considering its simplicity.
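The leaked code itself isn't reproduced here, but the NaN-handling trick described above is easy to sketch. A minimal illustration, assuming an HDL-32E-style (ring, azimuth) height grid and a hypothetical step threshold:

```python
import numpy as np

# Points stored row-major as (ring, azimuth), so scanning along a ring
# walks contiguous memory. No-return points are NaN. IEEE comparisons
# against NaN are always False, so the single test below applies the
# obstacle threshold AND skips NaN points with no explicit isnan check.
RINGS, COLS = 32, 1800                 # HDL-32E-like geometry, illustrative
z = np.full((RINGS, COLS), np.nan)     # height channel; fill from real scans

STEP = 0.15    # hypothetical: height jump between adjacent rings -> obstacle

def obstacle_mask(z):
    dz = np.abs(z[1:, :] - z[:-1, :])  # NaN wherever either point is NaN
    return dz > STEP                   # NaN > STEP is False: NaNs drop out

print(obstacle_mask(z).sum())          # 0 on this empty example
```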
Which Tor hidden service was this?
Asking for a friend...
If it was leaked in 2017, that's when Tor hidden service v2 URLs were still in use. Meaning the site is long gone and inaccessible these days.
I worked on an automotive FMCW LiDAR that didn't quite make it to market. Cool technology but it was difficult to scale the cost down, which is pretty important for automotive. Margins are very low in that market
Are LIDARs dangerous to the eyes of other drivers or pedestrians?
1550nm LiDAR Damaged Sony Camera at CES - http://image-sensors-world.blogspot.com/2019/01/1550nm-lidar...
Thanks for sharing that, clearly there is something at least worth investigating here. The concern about long-term exposure to LiDAR beams is not extensively studied. Current regulations seem to primarily focus on short-term exposure limits, and there is a lack of comprehensive testing under real-world conditions
No, there’s a class system for laser safety
The rating is for you to stick your eyes right up to it for a long period of time and still be fine
What’s “a long time”? Does it cover a 2-hour commute in traffic with 20+ cars around you blasting it continuously in any direction you look, invisibly?
Laser safety ratings are based on what would happen if the laser were pointed directly at your eye continuously; in general traffic, each lidar is scanning in a different direction. And while manufacturers do try to make their lasers instantaneously brighter than the sun in one specific wavelength, damage to your retina is caused by excessive heating, which doesn't care what IR wavelength the energy arrives at, except to the extent that it can reach the retina at all. On your morning commute I'd worry less about the lidars than about the much larger amount of invisible IR radiation given off by the sun. And I'd worry much less about the sun's IR radiation than about its UV radiation; wearing sunglasses during a 2-hour commute is best for your eyes.
They should not be. In theory they can be, but there are strict regulations to prevent that.
What are the regulations, and who are the relevant regulatory bodies? I could not find them with a Google search.
Good question! You're right, this is surprisingly hard to Google. It looks like the FDA is responsible. I would not have guessed that!
The National Highway Traffic Safety Administration (NHTSA) would have been my guess, but I'm not finding much there. They have a spec for LIDAR speed measurement devices, and one for the required sensors in vehicles, but nothing on the output of said sensors.
> For manufacturers of laser products, the standard of principal importance is the regulation of the Center for Devices and Radiological Health (CDRH), Food and Drug Administration (FDA) which regulates product performance. All laser products sold in the USA since August 1976 must be certified by the manufacturer as meeting certain product performance (safety) standards, and each laser must bear a label indicating compliance with the standard and denoting the laser hazard classification.
https://www.lia.org/resources/laser-safety-information/laser...
https://www.fda.gov/radiation-emitting-products/home-busines...
https://www.fda.gov/about-fda/fda-organization/center-device...
Sorry, I don’t believe the FDA is doing anything more than stamping a Class 1 or Class 2 sticker on component parts. They are not testing LIDAR arrays in situ under simulated driving conditions.
I would like to see crash test dummy style research around vehicular LIDAR
Crash tests in the US are also technically on the honor system, but NHTSA does test the most common models. Many they don't, though - the Cybertruck, for example.
They fall under the same regulations as lasers.
Those can be gamed easily.
Please explain!
I know we are talking about car-type lidar, but the iPhone Pro has a type of one and gets a depth map for photos. So you’re shooting it at everyone you take photos of.
I don't think the Lidar in Apple's stuff is actually a lidar; I think it's a structured light sensor.[1]
What do I mean by that? Lidar sends pulses of light and works out the difference between emission time and arrival time to determine how far the pulse has travelled.
A structured light sensor emits a pattern of dots, and any distortion of that pattern can be used to compute the shape of an object.
[1] https://image-ppubs.uspto.gov/dirsearch-public/print/downloa...
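For reference, the time-of-flight relationship the distinction above rests on:

```python
# Distance implied by a measured emit-to-return delay. The factor of 2
# is the round trip: the pulse travels out and back.
C = 299_792_458.0   # speed of light, m/s

def tof_distance(delta_t):
    return C * delta_t / 2

print(f"{tof_distance(66.7e-9):.1f} m")   # a ~67 ns delay -> ~10 m object
```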
They aren't talking about FaceID.
iPhone 12 introduced a lidar sensor in the back.
https://www.nature.com/articles/s41598-021-01763-9
Why is lidar so expensive? It also still needs to be miniaturized. But time should solve those problems; there's enough engineering effort.
When you're measuring something as precise as the time it takes light to bounce off something in front of you, you need really precise optics and electronics. Also, automotive lidar is still in the realm of low-volume specialty equipment, so there are little to no economies of scale in manufacturing this stuff.
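To put numbers on "really precise electronics": every centimeter of depth resolution costs you roughly 67 picoseconds of timing resolution.

```python
C = 299_792_458.0   # speed of light, m/s

def timing_needed(depth_m):
    """Round-trip time difference corresponding to a depth difference."""
    return 2 * depth_m / C

for d in (1.0, 0.10, 0.01):   # meters
    print(f"{d * 100:>5.0f} cm resolution -> {timing_needed(d) * 1e12:,.0f} ps")
```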
"Its particular superpower is that it can generate high resolution images of its surroundings much better than radar can."
Is this true though? Car radars are fixed. I guess a comparable lidar would be fixed too and have n points for n lasers.
Would a revolving radar have continuous resolution all the way around, while a lidar samples?
I thought the advantages of lidars were accuracy and being better at measuring the heights of objects, whereas radars flatten the view.
The issue isn't one of fixed vs rotating; it's that radar fundamentally can't achieve the resolution necessary to distinguish important features in the environment. It's easily fooled by oddly-shaped objects, especially concave features like corners, so while it's great for answering the question "am I close to something," it's not reliable for telling you what that something is, especially at longer ranges.
I believe automotive radar has a cone of sensitivity that is read as a single "pixel" worth of data. Even if the radar spun like lidar, the radar cone of sensitivity is thousands of times wider than the lidar beam so you can't make much of a picture with radar.
IIRC the data coming out of the Conti radars was preprocessed to give bearing, distance, and size of an object in the FOV of the unit. I don't know if I ever saw the true raw data out of one of them, but I'm curious what it looks like.
Yeah, I have a hard time imagining what a car radar image looks like.
On boats, radar seems to have really long range (it can see much further than lidar) but worse accuracy, i.e. things look like blobs.
A lidar image at 50+ meters is very sparse.
Roughly like in this paper https://www.semanticscholar.org/paper/RadarScenes:-A-Real-Wo...
Thanks. Spot on!
I'd be curious if the design of the Cybertruck affects readings at all. It's got angles straight outta an F-117.
I think "stealth" planes assumes the radar is under the plane on the ground? For the geometry. And they have some color or alloy that reflect less.
It's cooler than that these days - under the paint are antennas (plated? printed?) onto the skin panels that are tuned to absorb specific frequencies of interest.
I reckon it's probably not that bad, there are big surfaces that are almost normal to what would be incoming radio energy. Stealth shapes tend to reflect energy in a completely different direction from the source.
Here's the closest thing to data I've been able to find. I have no idea what to do with this info.
https://x.com/jwt0625/status/1848218860513628203
The polar plot at the end would be useful if there were plots for other cars and trucks to compare to. I'm assuming that it's a simulation?
Very high tech radars can generate amazing imagery, but they'll never top what lidar can do. Conceptually they're both doing the same sort of thing using EM radiation, but lidar uses a much smaller wavelength which gives it an intrinsic resolution advantage. Particularly at distances and with hardware sized relevant to cars.
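A back-of-envelope version of the wavelength argument, assuming equal car-scale apertures (the 5 cm aperture is illustrative; diffraction limits angular resolution to roughly lambda over aperture):

```python
APERTURE = 0.05   # assumed 5 cm optics/antenna on both systems

for name, wavelength in (("905 nm lidar", 905e-9),
                         ("77 GHz radar", 3e8 / 77e9)):
    theta = wavelength / APERTURE      # diffraction-limited beam width, rad
    spot = theta * 100                 # footprint at 100 m, in meters
    print(f"{name}: ~{theta * 1e3:.3f} mrad -> ~{spot * 100:.1f} cm spot at 100 m")
# The ~4000x wavelength ratio buys lidar a ~4000x resolution edge.
```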
Fantastic tech that Musk hates
In a recent No Priors podcast with the Waymo Co-CEO Dmitri Dolgov, he talks about how they evaluated just driving with cameras and how it isn't good enough for full autonomy and doesn't meet their bar for safety [1].
1: https://www.youtube.com/watch?v=d6RndtrwJKE&t=1119s
They went deep down the wrong path and need to justify their mistake. Waymo will be killed off any day now.
I find opinions like this to be almost as crazy as saying that the earth is flat, because Waymo has a working, truly self-driving taxi service RIGHT FREAKING NOW while Musk is still promising to have one some day in the hazy future, while NEVER making a single vehicle that can actually drive without someone in the car. Musk rejecting LIDAR means that he fundamentally doesn't understand the technological challenge of self-driving despite having access to the world's experts, OR he is cynically using false promises of self-driving to pump up Tesla's share price. I know which one I think is true.
I think anyone who listens to Musk talking about something they themselves know a lot about quickly realises that Musk's skills are elsewhere. He can motivate and market the hell out of a business whilst snorting more ketamine than a herd of horses but he is not a technical genius by any means. He pays people well to agree with him and fires them when they don't, so I suspect that his companies that produce better and more stable products do so because he micromanages them less.
It's weird that what he does is so easy, yet no one else is making EVs at scale in the USA, or landing rockets, 10 years after SpaceX did it.
Karpathy said in some podcast that Tesla uses LIDAR in training, and by doing this they can get a lot of the benefits. Not sure that all of the "world's experts" agree with you that you HAVE to use LIDAR. The rate of progress for FSD has been very impressive lately. I personally think it's very plausible that Tesla might beat Waymo to large-scale, location-independent autonomous driving.
The stats on the latest FSD are still terrible. It still needs human intervention far too frequently and is nowhere near being able to run without a human in the car or Tesla accepting liability for crashes.
Waymo's recent experiments with multimodal models and a purely camera-based system (EMMA) validate some of the claims that using LIDAR data in training does help. Pretty neat! Still not as good as a LIDAR + RADAR based system.
It doesn't. It has a party trick that works in very specific conditions.
At least it works. Meanwhile Tesla have nothing to show, even in "very specific conditions".
Do you expect a car with fewer sensors to fare any better soon?
But it works vastly better than anything Tesla has made so what does that say about Tesla?
From personal experience, the state-of-the-art Tesla vision FSD still can't drive east at sunrise, west at sunset, or in moderate rain. I haven't seen any sign of them solving that fundamental problem with vision, especially given there are existing non-vision solutions.
That's a bold claim. Care to justify it?
Yeah, it only works in extremely controlled environments driving really slowly.
The design is also flawed as it has to work with cameras anyway. The last thing you want is two systems arguing over what they see.
Waymos don’t drive slowly, I don’t know where you’re getting this from. If anything, they drive too fast for a thing without a driver.
Extremely controlled environments like the entire city of San Francisco?
Sensor fusion is a thing. There are no two systems that “argue with each other”. I can’t believe the same old ignorant tropes are still making rounds.
It doesn't have to be an argument. You know what each system is good at and prioritize inputs accordingly.
Google killing off Waymo by giving them $5.6B just a few weeks ago!
What do they actually use that much money for?
New vehicles and setting up depot operations.
So there's a video of him addressing this - he doesn't hate the tech. He mentions that it's wildly expensive for cars. But they use it heavily at SpaceX.
The issue isn't that it's wildly expensive for cars. But rather for Tesla.
Because the company has promised that existing Tesla owners would be able to use FSD.
Having to retrofit them to add LiDAR sensors would be cost-prohibitive.
Also he wants to reuse the foundational machine vision tech in Optimus bot, which probably won't have lidar.
Based on presentations we've seen what sets Tesla apart are its datasets not the core technology.
And those don't translate across to the Optimus bot.
I think they will, though. I think the enormous corpus of video data and the supercluster that powers self-driving development are the machine-vision analog of the internet-scale text data that gave rise to LLMs. We'll see the same moment for vision models that text-prediction models had once the data is there, where an enormous foundation model becomes much, much better, especially at zero-shot tasks.
Optimus should probably have LiDAR more than a car…
I would guess the plan is to have the foundational machine vision tech that becomes the core of robotics sensors. Not just Optimus but every robot arm in a factory, robot mule, etc. I don't think everything will have LIDAR if it's proven to be unnecessary.
It's not just Musk. Most automobile manufacturers have maintained that they need to find a way to do it with cheap and pretty sensors.
This is simply not true. Let's look at the best autonomous driving features available today, i.e. level 3:
Mercedes Drive Pilot: uses a lidar (and a dummy unit) up front.
BMW Personal Pilot: uses a lidar (and a dummy unit) up front.
Honda SENSING Elite: uses five (!) lidars.
They all use lidar, and some of the placement locations are downright hideous (Mercedes EQS). I think further development will require even more/better sensors, and manufacturers tend to agree on this point.
Chinese OEMs (BYD, Xiaomi, Nio) use lidar in almost all of their mid-to-premium segments. Also, the Polestar 3.
How well do they work? Camera only systems can be easily blinded by sun, fog, dirt, and snow
What are the benchmarks that say Mercedes, BMW, and Honda have the best level 3 features?
I ignore the Chinese because it is difficult to get reliable English information. Apart from those, these are the only level 3 systems available, and level 3 is the most advanced system that private individuals can currently get their hands on. Have I missed any?
It's not a benchmark, but there is a YouTube channel (Out of Spec) which tests these systems, and I think they also say Mercedes is the best in their "Hogback challenge".
https://www.youtube.com/watch?v=xK3NcHSH49Q&list=PLVa4b_Vn4g...
Worth checking out, many cars are very bad.
Sponsored by Magna, probably the contractor selling them the system...
Don't forget Blue Cruise from Ford.
Blue Cruise is level 2+, not 3, and does not rely on lidar.
All of these are far less capable than FSD. They might have more advanced regulatory approval because they have strong limitations on when they can be used, but if you drive the same route and compare, it's not even close.
I doubt it. Yes, FSD is more flexible and can also drive reasonably well on city streets, but there is a reason why it is not certified for level 3 on motorways. It would most likely fail certification. With a level 3 system, I can take my eyes off the road and watch a movie. Doing that with FSD, even in the best conditions, is suicidal. Level 3 vehicles must have an extremely low failure rate. Any crash would quickly be picked up by the media.
FSD is a versatile level 2 system, but at best a prototype for level 3. If we are talking about prototypes, it has to be compared to prototypes from other manufacturers like this <https://www.youtube.com/watch?v=0uSph0asNsk> fully autonomous system from ... 11 years ago. The reason FSD is available to the average consumer is mostly a matter of philosophy, not technology.
Maybe they changed their mind on it in the last 10 years. I had as the source a high-ranking BMW manager as well as an Audi one who each gave a public lecture at a university with such a statement.
If you want conventional car utilization where the car sits in a parking spot most of the time then the extra cost from the lidars is much more of an issue than if you're operating a fleet that is acting as taxis most of the day.
> have maintained that they need to find a way to do it with cheap
If the goal is to make roads safer, aiming for cheap is good: it means more people can afford the safer car. If it's not safer than humans, it should not be on the road in the first place.
Theoretically, if a human can drive a car using a pair of eyes connected to a brain, it should be possible to do the same using two cameras connected to some kind of image processing unit.
> Theoretically it should be possible to do that using two cameras connected to some kind of image processing unit
That "some kind of image processing unit" in humans has an awful lot of compute power and software.
If you remove $100k of sensors but have to add $200k of compute to run more advanced computer vision software, then it's a bad tradeoff to use only cameras, even if in theory that software is possible.
In theory. In practice neither the cameras nor processors available in cars function anywhere near human level.
Theory isn't really all that applicable to this, though - in theory nothing is stopping anyone from writing all code in assembly, but obviously that doesn't happen.
I think, more practically, cars have been adding driver assistance features for a while now - more cameras, blind spot monitoring, ultrasound for parking, lane drift indicators.
It is therefore not unreasonable to assume that adding more sensors is helpful (though even the old adage that more data is better than less would suggest as much).
To be honest, it's possible that having too much data causes problems in quick decision-making. Any redundant data will only slow down processing pipelines.
In practice humans aren't particularly safe drivers.
Is that because their vision fails to provide the information necessary to drive safely? Or is it due to distraction and/or poor judgment? I don't actually know the answer to this, but I assume distraction/judgment is a bigger factor.
I'm not a fan of the camera-only approach and think Tesla is making a mistake backing it due to path-dependence, but when we're _only_ talking about this in _broadly theoretical_ terms, I don't think they're wrong. The ideal autonomous driving agent is like a perfect Monday-morning quarterback who gets to look at every failure and say "see, what you should have done here was..." and it seems like it might well both have enough information and be able to see enough cases to meet some desirable standard of safety. In theory. In practice, maybe they just can't get enough accuracy or something.
> Is that because their vision fails to provide the information necessary to drive safely?
In certain conditions, yes. Humans drive terribly in dark and low light, something lidar excels in.
Still, millions of humans drive every night and only a minuscule percentage cause any accidents. So maybe we are not so bad at this.
According to NHTSA, about half of all fatal crashes occur at night, even though only 25% of driving happens at nighttime. So yes, we are pretty bad at this.
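Working the per-mile ratio out of those two figures:

    # ~50% of fatal crashes at night, ~25% of driving at night (NHTSA).
    night_crash_share, night_mile_share = 0.50, 0.25
    day_crash_share, day_mile_share = 0.50, 0.75

    night_rate = night_crash_share / night_mile_share  # 2.0 (relative)
    day_rate = day_crash_share / day_mile_share        # ~0.67 (relative)
    print(f"night is ~{night_rate / day_rate:.0f}x deadlier per mile")  # ~3x

So per mile driven, night driving is roughly three times deadlier than day driving.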
I totally agree. I think most accidents are caused by human nature (especially slow reaction time in specific conditions like being tired or drunk) and ignoring the laws of physics (driving too fast). And some are just pure bad luck (something/someone getting onto the road right in front of the car).
Sure, but why strive for that? We can have better than human perception by adding lidar and radar.
Why would we want the self-driving computer to be only as good as a human? I can't see in the dark, can't see through fog, and have trouble with rain. Why is human visibility the bar to meet here?
Oh and the sun. I get blinded when the sun is in my eyes at sunrise and sunset.
And how many car accidents did you cause in your life? Probably still not a lot, even with your flawed vision.
Imagine that same reasoning applied to the car itself. Ugh, wheels?? Humans get around just fine bipedally, so cars should have legs too.
Explains the Tesla robot actually
Because Musk thinks he is much, much smarter than he actually is and refuses to listen to anyone. And between how many people he fired at Twitter, Tesla, and soon the US Federal Government, I think he gets off on it.
Musk has said several times Lidar is great. It's just a stupid idea for automotive use and he's not wrong.
There's nothing similar in nature for a reason.
Airplanes don't flap their wings and boats don't wag their tails.
Assuming that all technology should imitate nature is a naive engineering principle. The solution should solve the problem within the given constraints.
Nature came up with something much better in both those cases.
Portable, energy efficient, light, doesn't need refined oil, tightly steers...
Boats and aeroplanes are terrible in comparison. They only work due to a huge network of global effort.
>They only work due to a huge network of global effort.
And horses don't need roads like cars do; cars only work thanks to a huge network of global effort. What point are you trying to make? That we abandon planes until we can develop flight as efficient as nature's? That we abandon LIDAR until we can develop visual-light perception and processing equal to the human eye and brain?
Time of flight ranging is used in nature by bats and whales/dolphins.
Both primarily use their eyes. Look it up.
and humans! https://www.atlasobscura.com/articles/how-to-echolocate
My back-of-the-napkin estimate is that a human using time-of-flight ranging would be unable to distinguish between an object directly in front of their face and one 8.6 meters away[1] (arithmetic sketched below the footnotes). I think human echolocation uses a different mechanism (presumably relating to amplitude)?
Skimming the Wikipedia article[2], it seems like animals do use time of flight, but also Doppler shifting.
(As a side note, some animals have apparently evolved active countermeasures to echolocation!? It seems obvious in retrospect but incredibly cool.)
There's interesting research into the mechanisms of human echolocation [3], but it was over my head. My impression is that the jury is still out on the precise mechanisms involved, but there's a lot of evidence to be considered; I'm sure someone with a better background would get more out of it than I did.
(I'm just curious about the mechanism, I agree that LIDAR has natural analogs.)
[1] Speed of sound * 25ms, 25ms being the rule of thumb I've memorized for the minimum interval for two sounds to register as distinct from each other. This is just folk wisdom I've picked up hacking on audio, so perhaps I'm mistaken.
[2] https://en.wikipedia.org/wiki/Animal_echolocation
[3] https://durham-repository.worktribe.com/preview/1375913/1963...
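For reference, the arithmetic behind [1], taking the speed of sound in air at roughly 343 m/s; note that if you count the echo's round trip, the ambiguity distance halves:

    SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C
    MIN_INTERVAL = 0.025    # the 25 ms rule of thumb from [1]

    one_way = SPEED_OF_SOUND * MIN_INTERVAL  # ~8.6 m, the figure above
    round_trip = one_way / 2                 # ~4.3 m if the echo goes out and back
    print(f"{one_way:.1f} m one-way, {round_trip:.1f} m round-trip")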
2024 https://news.ycombinator.com/item?id=42160071
2018 https://news.ycombinator.com/item?id=18208334
Rotor wings are also not found in nature, yet they give us the ability to navigate short-distance 3D space better than fixed wings.
Bats kinda have Lidar.
Echolocation, but they still mostly use their eyes. Same as dolphins.
Nature makes for bad drivers. For some age groups, cars are the largest cause of death. I think self-driving can do better.
LIDARs can be blinded by consumer-grade laser pointers. I wonder if there are systems that protect LIDARs against adversarial or DoS attacks.
Drivers can also be blinded by consumer grade laser pointers.
If someone starts attacking safety systems physically I would expect they will get quite a bit of jail time.
Messing with machines has a way lower ethical threshold for most people.
Like fooling vending machines to give you soda. Nothing 'my teenage friend' lost sleep over.
> Messing with machines has a way lower ethical threshold for most people
We're in a regime that ranges from throwing rocks at a freeway from a bridge to cutting brake lines as a prank. The attack surface here isn't new and isn't difficult to think through. Someone who can't is going to be equally dangerous, machine or man.
I don't think e.g. 'kids' will see it like cutting a brake line.
I agree, though; as a former ECU programmer, I am terrified of drive-by-wire.
More like making the Google computer do silly things.
More comparable would be the prank of tying a string of soda cans to the exhaust, at least in how I believe my scapegoat kids would see it.
There are also (stupid) people who like to point laser pointers at helicopter pilots...
They can be blinded by the sun too.
I worked as a research engineer at a uni, playing with a 16-beam Velodyne when those were fancy.
We put it on a car for a demo day, drawing the dots in 3D and marking obstacles red, and during sunset there were artifacts with no obvious way to filter them out.
Strangely, I was never able to recreate this. I think it was some specific atmospheric condition.
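(For the curious, this is the kind of first-pass filter one might try on such glare artifacts: drop low-intensity returns near the sun's azimuth. The array layout and thresholds are assumptions, and as the anecdote above suggests, this sort of heuristic did not reliably remove the artifacts.)

    import numpy as np

    # Hypothetical glare filter for a rotating-lidar point cloud.
    # Assumes an (N, 5) array of [x, y, z, intensity, azimuth_rad];
    # real Velodyne packet layouts differ.
    def filter_sun_glare(points, sun_azimuth_rad,
                         window_rad=0.1, min_intensity=5.0):
        intensity = points[:, 3]
        azimuth = points[:, 4]
        # Angular distance to the sun, wrapped into (-pi, pi]
        delta = np.angle(np.exp(1j * (azimuth - sun_azimuth_rad)))
        suspect = (np.abs(delta) < window_rad) & (intensity < min_intensity)
        return points[~suspect]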
Related: https://www.viksnewsletter.com/p/teslas-big-bet-cameras-over...
Waymo tried cameras-only recently as a research project.[1][2] They seem to do about as well as Tesla, which they don't consider good enough.
[1] https://www.forbes.com/sites/bradtempleton/2024/10/30/waymo-...
[2] https://arxiv.org/pdf/2410.23262
One of the cool things about the Waymo Driver is that it can be configured to work with different degrees of quality depending on the sensors available. In a low-risk environment (e.g. one closed to humans), like operating forklifts in an autonomous warehouse, it would work fine with just cameras. Waymo hasn't been very boastful to date, but some of the capabilities are hinted at in this interview: https://www.youtube.com/watch?v=d6RndtrwJKE
Is there a lidar unit I can take home and scan my house at high resolution (higher than iPhone)?
iPhone Pro with LiDAR and the free Scaniverse or Polycam apps.
You can, they are just expensive (other than the iPhone). Maybe $8k for a basic handheld one (e.g. Trion P1), $15k for a drone attachment (e.g. DJI Zenmuse L1), and more for the proper surveyor-grade units, including the tripod-mounted ones.
At the consumer end, photogrammetry tends to just be so much cheaper that it's preferred unless you really need defined accuracy at a high level of detail. Lidar currently works much better in an industrial/professional context because it's more accurate. Whether lidar will make the jump to lower-cost/consumer level is the big open question (and basically the same issue as for cars here).
The iPhone has actual lidar, not photogrammetry, right?
Yes, the iPhone has a laser array and sensor that it uses for lidar.
Look up the RPLidar family of devices. Cheap, 1D, easy to work with. By 1D I mean that it measures ranges in 360 degrees around the plane that it is spinning in.
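A minimal sketch of reading one, using the community `rplidar` Python package; the serial port path is an assumption and varies by OS:

    from rplidar import RPLidar

    lidar = RPLidar("/dev/ttyUSB0")  # port path is an assumption
    try:
        # Each scan is a list of (quality, angle_deg, distance_mm)
        # tuples covering one revolution in the spin plane.
        for i, scan in enumerate(lidar.iter_scans()):
            nearest = min(d for _, _, d in scan if d > 0)
            print(f"scan {i}: {len(scan)} points, nearest {nearest:.0f} mm")
            if i >= 9:
                break
    finally:
        lidar.stop()
        lidar.stop_motor()
        lidar.disconnect()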
I was wondering this too, although for a different use-case. A couple years ago I was walking through a field/vacant lot not far from Centralia, WA and I came across what I think is a grave.
The (supposed) "grave" was roughly human-sized and human-shaped, the ground was concave, sunken in and deepest at the center, and it was encircled with stones that were slightly larger than grapefruit.
The reason I suspect it's a grave is because I stumbled upon a very similar-looking thing at a historical site in Tooele County, Utah, named Mercur cemetery.
With Lidar I could prove/disprove my grave theory, correct?
I think ground-penetrating radar (GPR) would be a better fit for this grave scenario than LIDAR.
I think this depends on your budget and what exactly you want to do. Do you want to scan your house from outside? Sounds expensive, would probably have to be drone-mounted, and the drone would fly around for a while (depending on the shape of the house.) Inside, and don't mind some minor inaccuracies? Not Lidar, but a Kinect from yesteryear may be enough.