• 0 Posts
  • 313 Comments
Joined 10 months ago
Cake day: December 29th, 2023

  • Their stance is that by using lidar OEMs are hamstringing themselves on solving vision because they are so reliant on it.

    i get that… but… vision is kinda shit. why not use all the tools at your disposal? like literally “x-ray vision” is something we see as a superpower because it’d be so useful - radar gives us that

    vision is an approximation of things like lidar. can you get a depth map out of vision? sure, but why not just measure it directly? then you’re not introducing error from your model literally hallucinating

    The more sensors you deal with, the more your attention gets divided. You aren’t laser focused on one thing.

    kinda but also the last 20% takes 80% of the effort… solving a lot of easy problems with more information will lead to a better short term outcome, and then when you’re getting good results then you can solve from 80% to 85% then 85 to 90 etc across your whole sensor suite

    The extra sensors also cost a lot of money

    are they though? you can buy hobbyist ultrasonic sensors for literally a couple of bucks, lidar for a few hundred - sure, that’s not the grade you’d use for cars, but at some point it’s an economies of scale problem. they’re not actually that expensive for a commodity “good enough” sensor package

    You might not like the reasons, or their stance

    correct - i understand them, but as an engineer it’s just wrong when you’re talking about one of the most dangerous activities that humanity collectively engages in (driving)

    What happens when they keep seeing improvements in vision and now radar isn’t needed?

    i think this could be the sticking point - i don’t think any extra sensors are needed, just like i don’t think seatbelts or air bags etc are needed… but… they’re helpful and improve the safety of people in and around the car

    all the crazy headlines you see about it are idiots in cars being idiots

    agree, and i totally think driverless is the way to go - humans are far worse drivers than machines are, even right now without any further improvement

    … however, better isn’t perfect, and when it comes to safety simply ignoring tools because of some belief that eventually it’ll be fine is misguided at best, and negligent at worst

    If people wanna blame Elon for convincing people to be idiots, sure, you can do that

    absolutely that too! their systems aren’t “drives itself no problemo”, but that’s how they’re marketing it


  • I feel (and I’m no doctor) was that it was already too late by visit 3.

    perhaps, but even the other visits it seems the doctors were cagey around pregnancy - that’s what this kind of law does - it dissuades doctors from considering things because they’re worried about repercussions

    if the first 2 doctors had come to the conclusion that it was pregnancy related sepsis and that abortion is the only option, well now they’re in a real hard position - to let the patient get worse and worse in front of them and then likely take all the blame when things go downhill FAST? or “misdiagnose” and send her on her way for someone else to deal with?

    the first is a lot of personal risk; the 2nd is minimal risk… is it selfish? absolutely! but humans act selfishly - that’s just how we’re wired, and laws can’t just decide to make people act differently


  • i don’t think anyone is relying solely on radar - that’s the point. every sensor we have is fallible in some way (and so, btw, are our eyes - they can’t see through things, but radar can in some cases!)

    even if you CAN rely solely on vision, why hamstring yourself? with a whole sensor package, the algorithms know when certain sensors are useless - that’s what the training is for… knock 1 out, the others see that it’s in X condition and work around it

    if you only have a single sensor (like cameras) then if something happens you have 0 sensors… our eyes are MUCH better at vision than cameras - just the dynamic range alone, let alone the “resolution”… and that’s not even getting into, as others have said, the fact that our brains have had millions of years of evolution to process images.

    the technology for vision only just isn’t there yet. that’s just straight up fact. can it be? perhaps, but “perhaps in the future” is not “we should do this now”. that’s called a beta test, and you’re playing with human lives not just UI bugs - and there’s no good reason… just add extra sensors




  • the large majority of current self-driving cars have radar, lidar, ultrasonic, and cameras. their detection sets overlap and complement each other, so they can see a wide array of things that single-sensor setups can’t. focusing on 1 sensor and saying “it doesn’t see X” is a very poor argument when the others see those things just fine



  • Though he had already performed an ultrasound, he was asking for a second.

    The first hadn’t preserved an image of Crain’s womb in the medical record. …

    The state’s laws banning abortion require that doctors record the absence of a fetal heartbeat before intervening with a procedure that could end a pregnancy. Exceptions for medical emergencies demand physicians document their reasoning. “Pretty consistently, people say, ‘Until we can be absolutely certain this isn’t a normal pregnancy, we can’t do anything, because it could be alleged that we were doing an abortion,’” said Dr. Tony Ogburn, an OB-GYN in San Antonio.

    the delays at the 3rd hospital were almost entirely attributable to Texas abortion law.

    the problem with blaming doctors for fobbing off “hard cases that they simply don’t want to deal with” as you put it, is that they shouldn’t be hard cases - they have to think about more than what’s good for the patient, and that’s kinda ridiculous








  • 2 pass will encode a file once and store a log of information about what it did… then on the 2nd pass it’ll use that information to better judge where it should spend more or less bitrate and place keyframes - honestly i’m not too sure of the specifics here

    now, it’s most often used to keep a file to a particular file size rather than increasing quality per se, but i’d say keeping a file to a particular size means you’re using the space that you have effectively

    looks like with ffmpeg you do need to run it twice - there’s a log option

    i mostly export from davinci resolve so i’m not too well versed in ffmpeg flags etc

    doing a little more reading, it seems the consensus is that spending more time on encoding (ie a slower preset) will likely give a better outcome than 2 pass, unless you REALLY care about file size (like the file MUST be less than, but as close as possible to, 100mb)
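    for reference, a typical 2 pass invocation with ffmpeg looks something like this (input/output filenames and the 2M bitrate target are just placeholders - adjust to taste):

```shell
# pass 1: analyse the video and write stats to ffmpeg2pass-0.log; discard the actual output
ffmpeg -y -i input.mp4 -c:v libx264 -b:v 2M -pass 1 -an -f null /dev/null

# pass 2: do the real encode, using the pass-1 stats to decide where to spend bitrate
ffmpeg -i input.mp4 -c:v libx264 -b:v 2M -pass 2 -c:a aac output.mp4
```

    (on windows the null sink is NUL rather than /dev/null)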



  • if you’re planning on editing it, you can record at a very high bitrate and re-encode after the fact… yes, re-encoding loses some quality, however you’re likely to end up with a far better video if you record at 2x the h264 bitrate and then re-encode to your final h265 (or av1) bitrate than if you just record straight to h264 at your final bitrate

    another note on this: lots of streaming guides will say to use CBR (constant bitrate), which is true for streaming, however for a re-encode i think VBR (variable bitrate) with multi-pass encoding will give a better trade-off - CBR for live because the encoding software can’t predict what’s coming up, but when you have a known video it can move bitrate up and down because it knows when it’ll need more
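    as a sketch of that workflow (filenames and bitrates are placeholders - and note that libx265 takes its pass number via -x265-params rather than the plain -pass flag):

```shell
# export from your editor at roughly 2x the target bitrate, then 2 pass VBR re-encode to h265
ffmpeg -y -i export_highbitrate.mp4 -c:v libx265 -b:v 4M -x265-params pass=1 -an -f null /dev/null
ffmpeg -i export_highbitrate.mp4 -c:v libx265 -b:v 4M -x265-params pass=2 -c:a copy final.mp4
```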