• 1 Post
  • 17 Comments
Joined 1 year ago
Cake day: August 10th, 2023

  • You’re correct to identify that your two positions are inconsistent: (A) not wanting the innocent to be wrongly executed, and (B) wanting the option to enact retributive punishment against certain offenders.

    Let’s analyze these two imperatives:

    The benefits of (A) are quite self-evident. It’s bad to execute people for no reason. It’s maybe the most brutal and terrifying thing the state can do to a person. And wherever capital punishment exists, wrongful execution happens with non-zero probability.

    The benefits of (B) are that you get a nice bellyfeel that you’ve set the universe into karmic alignment. Since there’s no evidence that capital punishment deters crime (comparing statistics between states and countries with and without it bears this out), that bellyfeel is really the ONLY benefit of position (B).

    So if you want to prioritize what’s best overall for reducing harm in society, then select (A). If you enjoy appointing yourself the moral arbiter of who “deserves” to live and die (and consider killing some innocent people a price worth paying), then select (B).

    Simples!

  • I get that, but what I’m saying is that calling deep learning “just a fancy comparison engine” frames the concept in an unnecessarily pessimistic and sneery way. It’s more illuminating to look at the considerable mileage that “just pattern matching” yields, not only for practical engineering applications, but for the cognitive scientist and theoretician.

    Furthermore, what constitutes being “actually creative”? Consider DeepMind’s AlphaGo Zero model:

    Mok Jin-seok, who directs the South Korean national Go team, said the Go world has already been imitating the playing styles of previous versions of AlphaGo and creating new ideas from them, and he is hopeful that new ideas will come out from AlphaGo Zero. Mok also added that general trends in the Go world are now being influenced by AlphaGo’s playing style.

    Professional Go players and champions concede that the model developed novel styles and strategies that now influence how humans approach the game. If that can’t be considered a true spark of creativity, what can?


  • To counter the grandiose claims that present-day LLMs are almost AGI, people go too far in the opposite direction. Dismissing the technology as mere “line of best fit analysis” fails to recognize the power, significance, and difficulty of extracting meaningful insights and capabilities from data.

    Aside from the fact that many modern theories in human cognitive science are actually deeply related to statistical analysis and machine learning (embodied cognition, Bayesian predictive coding, and connectionism, for example), referring to it as a “line” of best fit is disingenuous because it downplays the important fact that the relationships found in these data are not lines, but highly non-linear, high-dimensional manifolds (the toy sketch at the end of this comment makes the contrast concrete). The development of techniques to efficiently discover these relationships in giant datasets is genuinely a HUGE achievement in humanity’s mastery of the sciences, as they’ve allowed us to create programs for tasks that would be impossible to write out explicitly as classical programs. In particular, our current ability to create classifiers and generators for unstructured data like images would have been unimaginable a couple of decades ago, yet we’ve already begun to take it for granted.

    So while it’s important to temper expectations (we are a long way from ever seeing anything resembling AGI as it’s typically conceived), oversimplifying all neural models as being “just” line fitting blinds you to the true power and generality that such a framework of manifold learning through optimization represents, as it relates to information theory, energy and entropy in the brain, engineering applications, and the nature of knowledge itself.
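
    To make the line-versus-manifold point concrete, here’s a toy sketch of my own (scikit-learn, arbitrary hyperparameters, not taken from any particular source): fit a literal line of best fit and a small neural network to the same deliberately non-linear data. The exact scores depend on the seed, but the line misses structure that even a tiny network recovers.

    ```python
    # Toy contrast: a literal "line of best fit" vs. a small non-linear model,
    # both fit to data generated from y = sin(2x) + noise.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(500, 1))                # 1-D inputs
    y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.normal(size=500)   # non-linear target

    line = LinearRegression().fit(X, y)                       # the "line of best fit"
    mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                       random_state=0).fit(X, y)              # tiny neural network

    print("linear R^2:", round(line.score(X, y), 3))  # low: the line misses most of the structure
    print("MLP R^2:   ", round(mlp.score(X, y), 3))   # close to 1: the non-linear fit captures it
    ```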



  • The real problem is folks who know nothing about it weighing in like they’re the world’s foremost authority. You can arbitrarily shuffle around definitions and call it “Poo Poo Head Intelligence” if you really want, but it won’t stop ignorance and hype reigning supreme.

    To me, it’s hard to see what kowtowing to ignorance by “rebranding” this academic field would achieve. Throwing your hands up and saying “fuck it, the average Joe will always just find this term too misleading, we must use another” seems defeatist and even patronizing. Seems like it would instead be better to try to ensure that half-assed science journalism and science “popularizers” actually do their jobs.


  • Calling someone “stupid” or “dumb” is all too common, especially online in places like Reddit and Twitter. I think it is a lazy and vacuous statement, or at best just a way to vent frustration.

    It’s much better, and more constructive, to be specific about what you find reprehensible. It could be that they have horrible morals, and calling them stupid is shorthand for saying that they are unable to reason their way to a consistent and correct set of moral principles. Or it could be that they have been indoctrinated into nasty world-views, and their “stupidity” shows up as a failure to resist or escape that indoctrination. Or they could be deliberately hurtful trolls who say outrageous and inflammatory things to upset others, in which case their “stupid” behavior is most likely an outward-facing reaction to some trauma in their own lives. Or maybe they are just sadistic, which warrants being called out specifically rather than being attributed to stupidity. A lot of anti-intellectual posturing seems to come from some combination of these causes.

    Anyway, I feel like being specific about your criticisms not only promotes compassion (which is ultimately most likely to win over those you disagree with) but also prompts you to reflect more thoughtfully on your own positions.