Are neural networks the future of AI? How important are they?

Scott E. Fahlman, Professor Emeritus, Carnegie Mellon, LTI and CSD

Complicated question, and one for which there is no real consensus among the experts.  I’ll very briefly state my own opinion, but that is not necessarily a majority view.

For what it’s worth, I’m one of the relatively few researchers who have done serious work on both neural-net learning algorithms (including some that did a kind of deep learning 25 years ago) and on symbolic methods for human-like, common-sense knowledge representation, reasoning, planning, and language understanding.

I believe that deep-learning neural nets (but not necessarily the algorithms being used now) will play a very important role in the future of AI. If we want to emulate human capabilities, I think that the neural nets will pretty much take over the lower-level parts of sensory-motor processing, and speech/language understanding, probably up to and including the learning of sequential word patterns and syntax.

Roughly, this is the stuff that we humans do without being aware of what is going on or how we learned it: standing, walking, reaching and grasping, throwing; picking out the words in a noisy stream of speech; recognizing objects, their parts, and spatial relations in a scene.

I don’t believe that neural networks, as currently understood, will take over higher-level conscious thought and planning (including creative planning and design); the symbolic parts of knowledge representation and inference; and language understanding/generation tasks that involve meaning. We will need symbolic representations for these things. I will be surprised if distributed “thought vectors” are adequate representations for these tasks.

In humans, it is pretty clear that this higher-level, more symbolic stuff must also be implemented in some sort of neural network — that’s all there is in the brain — but these neurons are not operating like current feed-forward or generative neural-net models. Instead, these networks function more like conventional computers that manipulate symbols, but with some massively parallel symbolic search and inference capabilities built in.

The neural-net and symbolic levels have to work together, and what happens at the interface is a very interesting area for investigation.  It’s pretty clear that the lower-level pattern-recognition parts are influenced by our expectations, some of which come from higher-level reasoning; it’s also pretty clear that the pattern-recognition and pattern-learning parts must be able to cause the creation of new symbols and relations that are accessible to the higher-level symbolic machinery.

By the way, my use of the terms “higher-level” and “lower-level” is not a value judgement, just a shorthand for the way most people classify certain mental functions.  Some of the “highest level” cognitive tasks, such as chess and calculus, were among the first things that AI researchers solved, while “lower-level” tasks such as manual dexterity and recognizing objects from images are only now starting to make real progress towards human-like performance.

Again, that is just one researcher’s best guess about where things are headed in AI.  Read what other researchers are saying and you will get a variety of other viewpoints and guesses.

Amazon Ready to Disrupt the Market

Opinions expressed by Forbes Contributors are their own.

The Platform as a Service (PaaS) market is going through a metamorphosis. A key driver of this change is the container revolution, led by Docker. Every PaaS vendor in the market has refactored its platform for containers. On the other hand, the combination of orchestration tools such as Kubernetes, Mesos, and Docker is becoming an alternative to traditional PaaS. The line between container orchestration and PaaS is getting blurred. For enterprises and decision makers considering PaaS, the current market landscape looks complex and confusing. Amid all this chaos, one vendor that is quietly redefining PaaS is Amazon Web Services.

Having invested heavily in the core building blocks of infrastructure – compute, storage, and networking – Amazon has been steadily moving up the stack to focus on platform services. From its vantage point, AWS has visibility into top customer use cases and deployment scenarios. By carefully analyzing what customers run on its infrastructure, AWS is building new managed services that are quickly becoming an alternative to self-hosted workloads. Amazon RDS, AWS Directory Service, Amazon Elastic File System, Amazon WorkMail, Amazon WorkDocs, and Amazon EC2 Container Service are a few examples of these services. AWS wants customers to sign up for its managed services instead of following the DIY approach.

In its current form, AWS can support everything a small or medium business needs. From hosted desktops to file sharing to collaboration to backup and archival, Amazon has it all. Beyond enterprise and business applications, it is now eyeing developers by offering a parallel universe of application lifecycle management in the cloud. The new family of code management services, such as AWS CodeDeploy, AWS CodeCommit, and AWS CodePipeline, handles the entire lifecycle of a cloud-native application. Amazon is in the process of building a brand-new PaaS that is very different from the rest.

Amazon API Gateway – an application programming interface management layer – is the latest addition to the AWS application services portfolio. Though it might just look like another service from AWS, this has the potential to become the cornerstone of AWS’ PaaS strategy. Amazon is calling this service the “front door” for applications to access data, business logic, and functionality from back-end services. API Gateway is another classic customer workload that became a managed service on the AWS cloud. So, how does this service enable Amazon to disrupt the PaaS market?

Last year at the AWS re:Invent conference, Werner Vogels unveiled a killer microservices platform called AWS Lambda. In a Gigaom Research report entitled “Why AWS Lambda is a Masterstroke from Amazon,” I analyzed the importance of this service. What’s special about Lambda is that it is a true NoOps platform. Developers bring their autonomous code snippets, which get invoked by an external event. Since its inception, AWS has been regularly adding Lambda hooks for popular services like S3, DynamoDB, Kinesis, and SNS. It recently added support for Java to this microservices platform. Though it was tempting to port the bulk of the business logic and workflow from monolithic apps to AWS Lambda, the service didn’t support exposing the code snippets as REST endpoints. Developers had to rely on service hooks to indirectly trigger Lambda functions.
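To make the event-driven model concrete: a Lambda function is just a handler that receives an event payload from a hooked service (S3, Kinesis, SNS, etc.) and returns a result. Here is a minimal Python sketch of an S3-triggered handler; the event shape follows AWS’s documented S3 notification format, but the bucket and key values are purely illustrative and can be invoked locally without an AWS account.

```python
# Minimal AWS Lambda-style handler for an S3 "object created" event.
# The nested event structure mirrors AWS's S3 notification format;
# bucket/key names below are illustrative, not from the article.

def handler(event, context):
    """Invoked by AWS whenever a hooked service (here, S3) emits an event."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(f"processed s3://{bucket}/{key}")
    return {"processed": len(results), "items": results}

# Local invocation with a sample event (no AWS account needed):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"},
                "object": {"key": "uploads/report.csv"}}}
    ]
}
print(handler(sample_event, context=None))
```

This is the “autonomous code snippet” in its entirety: no server, no scheduler, no process management — AWS supplies the invocation and the scaling.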

One of the most powerful aspects of the new Amazon API Gateway is its integration with AWS Lambda. Developers can upload code snippets to Lambda and expose them as standard REST endpoints hosted by the API Gateway, which essentially becomes the facade of the microservices platform. This service eliminates the need to spin up an EC2 instance that runs business logic exposed as an API. What’s more, developers can point and click to configure an API key, throttling, bursting, and caching, and even add a custom domain. Finally, they can also generate native SDKs of their APIs for Android, iOS, and JavaScript.

The combination of AWS Lambda and API Gateway becomes a powerful microservices platform without the tax of scheduling, orchestration, monitoring, logging, and security. Both API Gateway and AWS Lambda are elastic, enabling the developer to focus on the logic and code. Through the integration of CloudTrail and CloudWatch, performance metrics and logs are instantly available. Microservices hosted in AWS Lambda can use the AWS SDK to communicate with other services such as Amazon RDS and Amazon DynamoDB. This deployment topology makes applications highly available, scalable, and secure with no operations effort required. Deploying the same applications on a traditional PaaS involves quite a bit of configuration and management.
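With the API Gateway integration, a handler of the same shape can back a REST endpoint directly: the function returns an HTTP-style response (status code, headers, body) that the gateway relays to the caller. A hedged Python sketch follows — the greeting behavior and the query parameter are invented for illustration, but the response structure matches what a gateway proxy integration expects from a Lambda function.

```python
import json

# Lambda handler in the shape a gateway proxy integration expects:
# a dict with statusCode, headers, and a string body. The "greet"
# logic and the "name" parameter are illustrative only.

def api_handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Simulate the gateway invoking the function for GET /greet?name=Ana:
event = {"queryStringParameters": {"name": "Ana"}}
response = api_handler(event, None)
print(response["statusCode"], response["body"])
```

Everything an EC2-hosted API server would normally do — listening on a port, routing, TLS, scaling — is absorbed by the gateway and Lambda; the developer ships only this function.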

But API endpoints and code alone do not make an application complete; an interface is needed to turn them into web applications and mobile apps. Since the heavy lifting is offloaded to AWS Lambda, all the developer needs to do is host the web application that consumes the API exposed by the API Gateway. This is where Amazon S3’s web-hosting feature comes in handy. Designers and developers can build beautiful web interfaces based on Bootstrap, AngularJS, or other JavaScript frameworks. Since API Gateway supports the generation of a JavaScript SDK, the API can be consumed by static web applications hosted in Amazon S3. The same API can be targeted by native Android and iOS applications. For authentication and security, the application can be integrated with Identity and Access Management (IAM). This configuration completely avoids the need to spin up EC2 instances dedicated to hosting applications. The combination of S3, API Gateway, and AWS Lambda delivers scale without the need for administration.

CloudHealth Technologies!

We’re in the midst of one of the most profound transitions in IT history: the movement to the cloud. While the benefits – such as lower upfront costs, reduced management requirements, and on-demand scaling – are widely understood, managing, optimizing, and securing cloud infrastructure is a different story. Because of its dynamic nature, it can actually be substantially more challenging to manage than traditional data center infrastructure.

As a business begins to utilize multiple cloud providers, as well as its own data centers, these problems compound. Pressure mounts on engineering teams to automate processes and lower costs. Leaders shift valuable engineering resources away from core product development to cloud maintenance. Ops teams write programs to help automate instance purchasing and management, but cloud providers constantly change their pricing structures and technology infrastructure, rendering internally built tools obsolete.

Welcoming CloudHealth Technologies!

This is why we invested in CloudHealth.

CloudHealth is a Cloud Service Management platform that takes the pain out of managing cloud deployments and puts the power back in the hands of business users (in particular, the CFO and CIO). The platform provides customers with a centralized console for managing their hybrid and multi-cloud infrastructure. It integrates directly with cloud providers’ infrastructure, enabling customers to optimize and automate instance purchasing to take advantage of pricing changes, allowing business users to automate rules, policies, and governance, and giving security professionals visibility into real-time risks.

By reducing complexity, CloudHealth delivers immediate ROI for all stakeholders. Engineers can return their focus to the core product, finance teams are able to cut costs and gain visibility into usage by team and expense bucket, and CIOs can improve and optimize governance.

CloudHealth Technologies’ exceptionally strong growth and unit economics demonstrate the unique value customers achieve with the product. It is emerging as the clear leader in the category, has some of the strongest retention cohorts we’ve seen and is extremely well positioned to capitalize on the continued growth and complexity of hybrid and multi-cloud environments. Importantly, while the business has scaled rapidly, culture has scaled thoughtfully (just check out the Glassdoor reviews).

The CloudHealth Technologies team should be incredibly proud of what they’ve accomplished to date. In particular, Dan Phillips (Co-Founder & CEO), Joe Kinsella (Co-Founder & CTO) and Larry Begley (CFO and one of the first institutional investors while a GP at .406 Ventures) deserve tremendous credit for the team they’ve assembled and the business they’ve built.

We are thrilled to welcome the entire CloudHealth Technologies team to the Kleiner Perkins family!