AI

I’ll probably do a deep dive into understanding AI in like, March/April 2020, because work is seasonal for me. Here’s what I know now.

First. I suspect, even without having done a deep dive, that the frame problem is a big fucking thing. Like, “Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian” gets a few things wrong simply from not knowing the field, but it’s right that:

1. A lot of the information about how to solve a problem is contained in framing it in a useful way.
2. AI that attempts to model the world in an explicit way (logic, Bayes) seems to inevitably fail even harder than usual when it comes to solving or dissolving the frame problem.
3. I don’t know how to solve the frame problem, but ML systems on their own aren’t getting close either; living organisms (like, on the level of spiders) seem way the hell closer.
4. The way living organisms see the world does seem “Heideggerian”, in the sense of “I have a sense for things I can just go and do to change the world, and these things are raised into my awareness when they’re plausibly useful”, i.e. action-possibilities (my wording).

(Probably most humans see the world more in terms of what’s socially acceptable, whether they’re being a “good person” right now, whether they’re feeling emotional pain right now, etc., as I used to, and most animals, e.g. spiders or salamanders or whatever, see it more in terms of action-possibilities, like I do now. Go for the latter if you’re programming something.)
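
To make “action-possibilities” a little more concrete, here’s a toy sketch of what I have in mind. All the names and numbers are made up for illustration; this isn’t from the Dreyfus paper or any real system. The point is just that the agent never builds a full explicit model of the scene, it only surfaces the actions that are currently doable and plausibly useful.

```python
# Toy sketch of "action-possibility" style perception: the agent keeps a set
# of possible actions and only "notices" the ones whose preconditions hold
# and that look useful right now. Names here are illustrative only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionPossibility:
    name: str
    available: Callable[[dict], bool]   # can I just go and do this?
    useful: Callable[[dict], float]     # how relevant is it to what I'm up to?

def salient_actions(situation: dict, possibilities: list[ActionPossibility],
                    threshold: float = 0.5) -> list[str]:
    """Raise into 'awareness' only the actions that are doable and plausibly useful."""
    return [p.name for p in possibilities
            if p.available(situation) and p.useful(situation) > threshold]

# A spider-level example: nothing here models the whole scene, just what can be done.
possibilities = [
    ActionPossibility("pounce", lambda s: s["prey_distance"] < 2.0,
                      lambda s: 1.0 - s["prey_distance"] / 2.0),
    ActionPossibility("hide", lambda s: s["shadow_nearby"],
                      lambda s: 0.9 if s["predator_seen"] else 0.1),
    ActionPossibility("spin_web", lambda s: s["anchor_points"] >= 2,
                      lambda s: 0.2 if s["predator_seen"] else 0.7),
]

situation = {"prey_distance": 0.5, "shadow_nearby": True,
             "predator_seen": False, "anchor_points": 3}

print(salient_actions(situation, possibilities))  # -> ['pounce', 'spin_web']
```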

Second. Stop thinking that just applying the same techniques (ML), in isolation and with more intensity, will result in breakthroughs in what classes of things you can do.

I had an important third thing to say, but I seem to have forgotten it?

Well, that’s all I know for now.

Next Post: Factory Farming