Ask HN: Will there ever be a resurgence of interest in symbolic AI?

https://news.ycombinator.com/item?id=19713791

Employee of Cycorp here. Aside from the current ML hype-train (and the complementary unfashionability of symbolic AI), I think the reason symbolic AI doesn’t get as much attention is that it’s much more “manual” in a lot of ways. You get more intelligent results, but that’s because more conscious human thought went into building the system. As opposed to ML, where you can pretty much just throw data at it (and today’s internet companies have a lot of data). Scaling such a system is obviously a major challenge. Currently we support loading “flat data” from DBs into Cyc – the general concepts are hand-crafted and then specific instances are drawn from large databases – and we hope that one day our natural language efforts will enable Cyc to assimilate new, more multifaceted information from the web on its own, but that’s still a ways off.
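To make the “flat data” idea concrete, here’s a minimal sketch in Python — purely illustrative, not Cyc’s actual API; the table, column, and concept names are made up. The concepts and relations are hand-authored, and the instances come from a database table.

    # Illustrative only (not Cyc's API): hand-authored concepts/relations,
    # instances pulled from a relational table, emitted as simple triples.
    import sqlite3

    CONCEPTS = {"employees": "Employee"}            # table -> hand-crafted concept
    RELATIONS = {"hire_date": "hiredOnDate",        # column -> hand-crafted relation
                 "manager_id": "hasManager"}

    def load_flat_data(db_path):
        """Turn rows of a 'flat' table into (subject, predicate, object) assertions."""
        conn = sqlite3.connect(db_path)
        triples = []
        for emp_id, hire_date, manager_id in conn.execute(
                "SELECT id, hire_date, manager_id FROM employees"):
            subject = f"Employee-{emp_id}"
            triples.append((subject, "isa", CONCEPTS["employees"]))
            triples.append((subject, RELATIONS["hire_date"], hire_date))
            triples.append((subject, RELATIONS["manager_id"], f"Employee-{manager_id}"))
        return triples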

I (and my company) believe in a hybrid approach; it will never be a good idea to use symbolic AI for getting structured data from speech audio or raw images, for example. But once you have those sentences, or those lists of objects, symbolic AI can do a better job of reasoning about them. Paired together, ML and symbolic methods can cover each other’s weaknesses.
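A minimal sketch of that division of labor (the detector is a stub and every name here is made up — nothing below is a real system): the ML side emits symbols, and a tiny rule layer reasons over them.

    # Hybrid sketch: a stand-in "perception" stage emits symbols, a rule layer reasons.
    def perceive(image):
        # Stub for an ML detector; a real one would return what it actually saw.
        return [{"type": "person", "location": "crosswalk"},
                {"type": "car", "location": "crosswalk", "moving": True}]

    RULES = [
        # (condition over detected facts, conclusion to add)
        (lambda facts: any(f["type"] == "person" and f["location"] == "crosswalk" for f in facts)
                       and any(f["type"] == "car" and f.get("moving") for f in facts),
         "hazard: vehicle approaching occupied crosswalk"),
    ]

    def reason(facts):
        return [conclusion for condition, conclusion in RULES if condition(facts)]

    print(reason(perceive(image=None)))  # -> ['hazard: vehicle approaching occupied crosswalk']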

I’ve been following Cyc since the Lenat papers in the 80s. Wondering what happened to OpenCyc, if you guys changed your thinking about the benefits of an open ecosystem, and if there’s any future plans there?

I’ve only been here for a couple years, so my perspective on that is limited. My understanding is that we still have some form of it available (I believe it’s now called “ResearchCyc”), but there isn’t a lot of energy around supporting it, much less promoting it.

As to why that is, my best guess is a combination of not having enough man-hours (we’re still a relatively small company) and how difficult it has historically been for people to jump in and play with Cyc. There could also be a cultural lack of awareness that people still have interest in tinkering with it, which is something I’ve thought about bringing up for discussion.

As to the accessibility issue, that’s been one of our greatest hurdles in general, and it’s something we’re actively working on reducing. The inference engine itself is something really special, but in the past most of our contracts have been pretty bespoke; we essentially hand-built custom applications with Cyc at their core. This isn’t because Cyc wasn’t generic enough; it’s because Cyc was hard enough to use that only we could do it. We’re currently working to bridge that gap. I’m personally part of an effort to modernize our UIs/development tools, and to add things like JSON APIs, for example. Others are working on much-needed documentation, and on sanding off the rough edges to make the whole thing more of a “product”. We also have an early version of containerized builds. Currently these quality-of-life improvements are aimed at improving our internal development process, but many of them could translate easily to opening things up more generally in the future. I hope we do so.

There’s an official statement of sorts here: https://www.cyc.com/opencyc/

That meshes with what I’ve heard at conferences, that Cyc management was worried people were treating OpenCyc as an evaluation version of Cyc, even though it was significantly less capable, and using its capabilities to decide whether to license Cyc or not. The new approach seems to be that you can get a free version of Cyc (the full version) for evaluation or research purposes, and the open-source version was discontinued.

What kind of experiments have you guys done that combine symbolic and statistical/ML methods? It sounds like an area ripe for research.

I’m not an expert on this, but here’s my current understanding:

Symbolic reasoning/AI is fantastic when you have the right concepts/words to describe a domain. Often, the hard (“intelligent”) work of understanding a domain and distilling its concepts needs to be done by humans. Once this is done, it should in principle be feasible to load this “DSL” into a symbolic reasoning system, to automate the process of deduction.
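For instance, here is a toy forward-chaining reasoner (illustrative Python, not tied to any particular system): the “DSL” is just human-distilled facts plus one rule, and the deduction itself is mechanical.

    # Toy forward chaining: keep applying rules until no new facts appear.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def grandparent_rule(facts):
        new = set()
        for (p1, a, b) in facts:
            for (p2, c, d) in facts:
                if p1 == p2 == "parent" and b == c:
                    new.add(("grandparent", a, d))
        return new

    def forward_chain(facts, rules):
        changed = True
        while changed:
            changed = False
            for rule in rules:
                derived = rule(facts) - facts
                if derived:
                    facts |= derived
                    changed = True
        return facts

    print(forward_chain(set(facts), [grandparent_rule]))
    # ... includes ("grandparent", "alice", "carol")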

The challenge is, what happens when you don’t have an appropriate distillation of a complex situation? In the late eighties and early nineties, Rodney Brooks and others [1] wrote a series of papers [2] pointing out how symbols (and the definiteness they entail) struggle with modeling the real world. There are some claimed relations to Heideggerian philosophy, but I don’t grok that yet. The essential claim is that intelligence needs to be situated (in the particular domain) rather than symbolic (in an abstract domain). The “behavior-driven” approach to robotics stems from that cauldron.

[1]: Authors I’m aware of include Philip Agre, David Chapman, Pattie Maes, and Lucy Suchman.

[2]: For a sampling, see the following papers and related references: “Intelligence without reason”, “Intelligence without representation”, “Elephants don’t play chess”.

> The essential claim is that intelligence needs to be situated (in the particular domain) rather than symbolic (in an abstract domain).

I think there is something (a lot) to this. Consider how much of our learning is experiential, and would be hard to put into a purely abstract symbol-manipulating system. Take “falling down”, for example. We (past a certain age) know what it means to “fall”, because we have fallen. We understand the idea of slipping, losing your balance, stumbling, and falling due to the pull of gravity. We know it hurts (at least potentially), and we know that skinned elbows, knees, palms, etc. are a likely consequence. And that experiential learning informs our use of the term “fall” in metaphors and analogies we use in other domains (“the market fell 200 points today, on news from China…”) and so on.

This is one reason I like to make a distinction between “human level” intelligence and “human like” intelligence. Human level intelligence is, to my way of thinking, easier to achieve, and has arguably already been achieved depending on how you define intelligence. But human like intelligence, which features that understanding of the natural world, some of what we call “common sense”, and so on, seems like it would be very hard to achieve without an intelligence that experiences the world like we do.

Anyway, I’m probably way off on a tangent here, since I’m really talking about embodiment, which is related to, but not exactly the same as, situatedness. But that quote reminded me of this line of thinking for whatever reason.

I’m not into AI, but from what I’ve heard of it, I’ve long perceived quite a gap between AI and human intelligence, which is embodied cognition. It appears to me that human reasoning concepts are largely scaled and paced by the physical and biological world, while this information is not accessible to a highly computational AI.

E.g. the human sense of time is closely tied to physiological rhythms, if only heartbeat pace.
More generally, all emotional input can steer reasoning (emotional intelligence).

Only my 2c on this. Not sure how accurate it is.

I think the opposite is true. Humans think in terms of symbols to model the world around them. A child is born knowing nothing, a completely blank slate, and slowly he learns about his surroundings. He discovers he needs food, he needs to be protected and cared for. He discovers he doesn’t like pain. If you talk to a 3-year-old child you can have a fairly intelligent conversation about his parents, about his sense of security, because this child has built a mental model of the world as a result of being trained by his parents. This kind of training requires context and cross-referencing of information, which can only be done by inferencing. You can’t train a child by flashing 10,000 pictures at him, because pictures are not experience; even adults can be fooled by pictures, which are only a 2D representation of 3D space. So all these experiences that a small child has of knowing about the world come to him symbolically; these symbols model the world and give even a small child the ability to reason about external things and classify them. This is human level intelligence.

Human like intelligence is training a computer to recognize pixel patterns in images so it can make rules and inferences about what these images mean. This is human like intelligence, as the resulting program can accomplish human like tasks of recognition without the need for context on what these images might mean. But there is no context involved about any kind of world; this is pure statistical training.

> Humans think in terms of symbols to model the world around them. A child is born knowing nothing, a completely blank slate, and slowly he learns about his surroundings.

Actually, the research has found that newborn infants can perceive all sorts of things, like human faces and emotional communication. There is also a lot of inborn knowledge about social interactions and causality. The embodied cognition idea is looking at how we experience all that.

By the way, Kant demonstrated a couple of centuries ago that the blank slate idea was unworkable.

Please check out the Genesis Group at MIT’s CSAIL. Or Patrick Winston’s Strong Story Hypothesis. Or Bob Berwick. Many at MIT are still working through the ’80s winter, without the confirmation bias of Minsky and Papert’s Perceptrons with all the computation power and none of the theory (now called neural nets). Or any of the papers here: https://courses.csail.mit.edu/6.803/schedule.html

Or the work of Paul Werbos, the inventor of backpropagation, which was heavily influenced by — though itself perhaps outside the canon of — strictly symbolic approaches.

Aren’t Mathematica and automated proving systems successful cases where symbolic AI happens?

Hybrid approaches have been getting some interesting results lately[0], and will probably continue to do so, but the statistical and symbolic approaches are so different that these are essentially cross-disciplinary collaborations (and each hybrid system I’ve seen is essentially a one-off that occupies a unique local maximum).

I suspect that eventually there will be an “ImageNet Moment” of sorts starring a statistical/symbolic hybrid system and we’ll see an explosion of interest in a family of architectures (but it hasn’t happened yet).

[0] http://news.mit.edu/2019/teaching-machines-to-reason-about-w…

A lot of what’s been going on in the PL community would have been called “symbolic AI” in the 80’s. Program synthesis, symbolic execution, test generation, many forms of verification — all involving some kind of SAT or constraint-solving.
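A tiny example in that spirit, using the Z3 SMT solver’s Python bindings (pip install z3-solver); it’s a generic constraint problem, not taken from any specific synthesis or verification system.

    # Ask an SMT solver for values satisfying a handful of constraints.
    from z3 import Ints, Solver, sat

    x, y = Ints("x y")
    s = Solver()
    s.add(x + y == 10, x - y == 4, x > 0, y > 0)

    if s.check() == sat:
        print(s.model())  # e.g. [x = 7, y = 3]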

Let’s see.

Databases. (Isn’t Terry Winograd’s SHRDLU conversation the kind of conversation that you have with the SQL monitor?) Compilers. (E.g. programming languages use theories developed to understand human languages.) Business Rules Engines. SAT/SMT Solvers. Theorem proving.

There is sorta this unfair thing that once something becomes possible and practical it isn’t called A.I. anymore.

The big win in symbolic AI has been in theorem proving. In spaces which do have a formal structure underneath, that works well. In the real world, not so much.

I expect hybrid deep learning and symbolic AI systems to be highly relevant. My background which is what I base the following opinions on: I spent the 1980s mostly doing symbolic AI except for 2 years of neural networks (wrote first version of SAIC Ansim neural network library, supplied the code for a bomb detector we did for the FAA, on DARPA neural net advisory panel for a year). For the last 6 years, just about 100% all-in working with deep learning.

My strong hunch is that deep learning results will continue to be very impressive and that with improved tooling basic applications of deep learning will become largely automated, so the millions of people training to be deep learning practitioners may have short careers; there will always be room for the top researchers but I expect model architecture search, even faster hardware, and AIs to build models (AdaNet, etc.) will replace what is now a lot of manual effort.

For hybrid systems, I have implemented enough code in Racket Scheme to run pre-trained Keras dense models (code in my public github repos) and for a new side project I am using BERT, etc. pre-trained models wrapped with a REST interface, and my application code in Common Lisp has wrappers to make the REST calls so I am treating each deep learning model as a callable function.
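In Python rather than Lisp, the “model as a callable function” idea looks roughly like this; the endpoint URL and JSON shape below are assumptions for illustration, not a real service.

    # Treat a REST-wrapped pre-trained model as an ordinary function.
    import requests

    def sentence_embedding(text, endpoint="http://localhost:8500/embed"):
        """POST text to the model server and return its output vector."""
        response = requests.post(endpoint, json={"text": text}, timeout=10)
        response.raise_for_status()
        return response.json()["embedding"]

    # Downstream symbolic code can then call it like any other function:
    # vec = sentence_embedding("Symbolic and statistical AI are complementary.")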

What do you think about conceptual AI? AI models that are able to create, test, modify ideas/concepts on the fly. We need a breakthrough…

> millions of people training to be deep learning practitioners may have short careers

Have database admins disappeared? How about front end devs?

I don’t know if database admins have disappeared. But we have never needed one to take care of our DynamoDB tables and our “serverless” Aurora databases.

Even though I’m pretty sure AWS needs a lot of them; although not one (or more) for each and every single one of their customers.

Random forests produce the same kind of decision trees that used to be hand-crafted, but admittedly, the ones they generate look distinctly “non-human”.
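For anyone curious, a quick scikit-learn snippet (assuming scikit-learn is installed) shows that machine-generated look: train a small forest and print one of its trees as rules.

    # Train a small random forest and dump one of its learned trees as rules.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import export_text

    X, y = load_iris(return_X_y=True)
    forest = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0).fit(X, y)

    # Thresholds like "petal width (cm) <= 0.80" are chosen by the algorithm, not a person.
    print(export_text(forest.estimators_[0], feature_names=load_iris().feature_names))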

It’s really hard to make predictions… especially about the future.[1] But to the extent that I have anything to say about this, I’ll offer this:

1. For all the accomplishments made with Deep Learning and other “more modern” techniques (scare quotes because deep learning is ultimately rooted in ideas that date back to the 1950’s), one thing they don’t really do (much of) is what we would call “reasoning”. I think it’s an open question whether or not “reasoning” (for the sake of argument, let’s say that I really mean “logical reasoning” here) can be an emergent aspect of the kinds of processes that happen in artificial neural networks. Perhaps if the network is sufficiently wide and deep? After all, it appears that the human brain is “just neurons, synapses, etc.” and we manage to figure out logic. But so far our simulated neural networks are orders of magnitude smaller than a real brain.

2. To my mind, it makes sense to try to “shortcut” the development of aspects of intelligence that might emerge from a sufficiently broad/deep ANN, by “wiring in” modules that know how to do, for example, first order logic or $OTHER_THING. But we should be able to combine those modules with other techniques, like those based on Deep Learning, Reinforcement Learning, etc. to make hybrid systems that use the best of both worlds (there’s a minimal sketch of this after the list).

3. The position stated in (2) above is neither baseless speculation / crankery, nor is it universally accepted. In a recent interview with Lex Fridman [2], researcher Ian Goodfellow seemed to express some support for the idea of that kind of “hybrid” approach. Conversely, in an interview in Martin Ford’s book Architects of Intelligence [3], Geoffrey Hinton seemed pretty dismissive of the idea. So even some of the leading researchers in the world today are divided on this point.

4. My take is that neither “old skool” symbolic AI (GOFAI) nor Deep Learning is sufficient to achieve “real AI” (whatever that means), at least in the short-term. I think there will be a place for a resurgence of interest in symbolic AI, in the context of hybrid systems. See what Goodfellow says in the linked interview, about how linking a “knowledge base” with a neural network could possibly yield interesting results.

5. As to whether or not “all of intelligence” including reasoning/logic could simply emerge from a sufficiently broad/deep ANN… we only just have the computing power available to train/run ANNs that are many orders of magnitude smaller than actual brains. Given that, I think looking for some kind of “shortcut” makes sense. And if we want a “brain” with the number of neurons and synapses of a human brain, that takes forever to train, we already know how to do that. We just need a man, a woman, and 9 months.
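As a minimal sketch of the hybrid idea in point 2 (the classifier, labels, and constraint are all made up for illustration), a hand-written logical constraint can veto a learned model’s soft scores at decision time:

    # Combine a (stubbed) neural classifier with a hard symbolic constraint.
    def neural_scores(image):
        # Stand-in for a trained classifier's output probabilities.
        return {"cat": 0.48, "dog": 0.45, "fish": 0.07}

    def violates_constraints(label, context):
        # Hand-written knowledge: in this toy world, "fish" never appears in a living room.
        return label == "fish" and context == "living_room"

    def predict(image, context):
        scores = neural_scores(image)
        allowed = {k: v for k, v in scores.items() if not violates_constraints(k, context)}
        return max(allowed, key=allowed.get)

    print(predict(image=None, context="living_room"))  # -> 'cat'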

[1]: https://quoteinvestigator.com/2013/10/20/no-predict/

[2]: https://www.youtube.com/watch?v=Z6rxFNMGdn0&feature=youtu.be…

[3]: http://book.mfordfuture.com/

“fell by the wayside at the beginning of the AI winter”

I believe the various aspects of the Semantic Web are a continuation of symbolic AI. My two cents as a complete outsider on the topic.

Yes, this. Alternatively, what’s the logical next step once the semantic web is realized? Ask yourself where wikidata is going in the long run.

There is some interesting work using graph embeddings (like word embeddings) to add data and relations to Semantic Web-style knowledge graphs.
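Roughly, the idea (shown here with a TransE-style score over made-up two-dimensional vectors; real systems learn the embeddings from the graph) is that a candidate triple is plausible when head + relation lands near tail:

    # TransE-style link scoring over toy, hand-set embeddings.
    import numpy as np

    embeddings = {
        "Paris":      np.array([0.9, 0.1]),
        "France":     np.array([1.0, 1.0]),
        "capital_of": np.array([0.1, 0.9]),  # relation vector
    }

    def transe_score(head, relation, tail):
        """Lower is better: a true triple should satisfy head + relation ~ tail."""
        return np.linalg.norm(embeddings[head] + embeddings[relation] - embeddings[tail])

    # A low score suggests ("Paris", "capital_of", "France") is a plausible link.
    print(transe_score("Paris", "capital_of", "France"))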

They are, but the successful part of the Semantic Web is almost entirely limited to open-source datasets (grouped under the “Linked Data”, or “Linked Open Data” initiative). That’s pretty much the only part of the web that actually has an incentive to release their info in machine-readable format – everyone else would rather control the UX end-to-end and keep users dependent on their proprietary websites or apps.
