In the field of natural language processing (NLP), data is king: the more data you have, the better your results. Most new research is freely accessible these days, and thanks to the cloud there is virtually unlimited computing power at our disposal. What keeps an NLP researcher from achieving state-of-the-art results, then, is often a lack of good data.
Writing real-time applications is hard; harder still if they need to be distributed and fault-tolerant. Minimizing latency and maximizing throughput are major goals for such applications. They must deliver quick response times and a positive user experience even when an external system fails or traffic spikes.
The legacy approach is to rely on external services, such as databases and queues, to handle concurrent or asynchronous operations. However, this is not viable in many scenarios: real-time systems such as trading or banking applications cannot afford the long wait times involved in handling concurrent requests that way.
How often do you stumble on a new word while reading? If you are like me, you won't look it up in a dictionary. You will carry on reading as your mind intuitively figures out the meaning. We have to hand it to our brains: over the course of millennia, the human brain has become adept at many things, including language and object recognition. So even if we don't know exactly what we are looking at, our brains fill in the gaps by providing a rough interpretation.
What if we could create intelligent machines that could do the same—learn by associating things, just like the human brain? It is a question that has puzzled and driven curious minds for generations. This post takes a look at the evolution of Artificial Intelligence (AI)—the making of the thinking machine.