AI experiments you can tinker with

Sketch2Code, Microsoft
Sketch2Code uses artificial intelligence to convert hand-drawn sketches of user interfaces into working HTML code. It does this with computer vision, a combination of AI and image processing.
Like most AI projects, Sketch2Code was trained to recognise different HTML and web-design elements, such as a text box or a button. Text recognition is also used to pick up any text in the design you provide to the model. All of this leads to the generation of HTML code that mirrors the design or layout you provided.
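If you're curious what that last step might look like under the hood, here's a minimal sketch in Python - not Microsoft's actual pipeline - of turning a handful of detected elements into HTML. The element labels, positions, and layout logic are all hypothetical.

```python
# A toy version of the final step: turning detected design elements into HTML.
# The detections below are hypothetical; a real system would get them from
# object-detection and text-recognition models run over the sketch image.

detections = [
    {"type": "heading", "text": "Sign up", "top": 10},
    {"type": "textbox", "text": "Email address", "top": 60},
    {"type": "button", "text": "Submit", "top": 120},
]

def element_to_html(element):
    """Map one detected element to a rough HTML equivalent."""
    kind, text = element["type"], element["text"]
    if kind == "heading":
        return f"<h1>{text}</h1>"
    if kind == "textbox":
        return f'<input type="text" placeholder="{text}">'
    if kind == "button":
        return f"<button>{text}</button>"
    return f"<div>{text}</div>"

# Emit elements in the order they appear on the sketch, top to bottom.
markup = "\n".join(
    element_to_html(e) for e in sorted(detections, key=lambda e: e["top"])
)
print(markup)
```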

Talk to Books, Google
Google’s ‘Talk to Books’ is an AI-powered search tool that lets you hold a conversation with a book. Whenever you type in a statement or a question, the model looks through over 100,000 books to find responses that are most likely to come next in the conversation, or that directly answer the question you are asking. You will get better results from this tool if you search using full sentences rather than keywords.
The model is trained on natural-language input, primarily English, though it can be used with other languages as well. Note that the model does not necessarily give accurate answers, or even something related to your topic; it simply returns sentences that pair naturally with your original query.
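The underlying idea - ranking candidate sentences by how well they fit your query - can be sketched with an off-the-shelf sentence-embedding library. This is not Google's model; the library, checkpoint, and example passages below are just one plausible stand-in.

```python
# A rough sketch of response ranking by semantic similarity, the idea behind
# Talk to Books. The sentence-transformers library and the 'all-MiniLM-L6-v2'
# checkpoint are stand-ins chosen for the example, not what Google uses.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# In the real tool, the candidates are passages drawn from 100,000+ books.
passages = [
    "The mitochondria are often called the powerhouse of the cell.",
    "She walked along the shore, waiting for the tide to turn.",
    "Honey never spoils because bacteria cannot survive in it.",
]

query = "Why does honey last so long without going bad?"

# Embed the query and the passages, then rank passages by cosine similarity.
query_vec = model.encode(query, convert_to_tensor=True)
passage_vecs = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(query_vec, passage_vecs)[0]

best = scores.argmax().item()
print(passages[best])  # the passage that best "answers" the query
```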

AutoDraw, Google
AutoDraw is a web-based drawing tool that uses artificial intelligence to turn ordinary scribbles into beautiful clipart. The tool predicts what you are trying to draw by processing your scribbles, then suggests objects or images that resemble them. When you click one of the suggestions, your original drawing is replaced by the image you selected.
AutoDraw constantly learns from its users. Every time a user scribbles a certain pattern and selects one of the tool's suggestions, it knows its prediction was accurate. Conversely, whenever you redo a drawing just to give it a hint of what you actually want, AutoDraw realises that its predictions were off the mark.
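As a rough illustration - and nothing like Google's real implementation - here's how a suggestion list might combine a recogniser's similarity scores with a record of which suggestions users accepted before. All the scores, counts, and weights below are made up.

```python
# A toy model of AutoDraw's feedback loop (not Google's implementation):
# rank clipart suggestions for a scribble, then boost whatever users accept.
from collections import defaultdict

# Hypothetical similarity scores between the current scribble and clipart,
# as a real recogniser might produce them.
similarity = {"cat": 0.62, "dog": 0.58, "fox": 0.31}

# How often users accepted each suggestion for scribbles like this one.
accept_counts = defaultdict(int, {"dog": 5, "cat": 1})

def rank_suggestions(similarity, accept_counts, weight=0.05):
    """Order suggestions by model similarity plus a small user-feedback bonus."""
    return sorted(
        similarity,
        key=lambda name: similarity[name] + weight * accept_counts[name],
        reverse=True,
    )

print(rank_suggestions(similarity, accept_counts))  # ['dog', 'cat', 'fox']

# When the user clicks a suggestion, record it so future rankings improve.
accept_counts["dog"] += 1
```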

Quick Draw, Google
Quick Draw is a game built with machine learning and neural networks, and it works much like AutoDraw. You draw something on the canvas and it tries to predict what you are drawing. Quick Draw uses the same technique Google uses to recognise handwriting. It notices two things about your drawings - the pattern itself, and how the pattern was drawn.
Quick Draw has produced the world’s largest doodling dataset, with over 50 million drawings across more than 300 categories. Every time someone plays the game, their doodles become part of this dataset. Just like other AI tools, Quick Draw depends on its users for training: the more we play, the better it gets.
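Because the doodle dataset is publicly available, you can train your own toy recogniser on it. Here's a minimal Keras sketch that classifies 28x28 doodle bitmaps; the file names are placeholders for category files you'd download yourself, and unlike the real game it ignores how the pattern was drawn (stroke order) entirely.

```python
# A minimal doodle classifier in Keras, assuming you have downloaded a few
# categories of the public Quick Draw dataset as 28x28 bitmaps. The file
# names below are placeholders; the real game also models stroke order with
# a sequence network, which this bitmap-only sketch ignores.
import numpy as np
from tensorflow import keras

categories = ["cat", "dog", "fox"]  # any subset of the 300+ categories
x, y = [], []
for label, name in enumerate(categories):
    bitmaps = np.load(f"{name}.npy")[:5000]          # placeholder local files
    x.append(bitmaps.reshape(-1, 28, 28, 1) / 255.0)
    y.append(np.full(len(bitmaps), label))
x, y = np.concatenate(x), np.concatenate(y)

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(len(categories), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, validation_split=0.1)
```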

Teachable Machine, Google
Teachable Machine is a web-based tool for building machine learning models. You, as a user, provide the model with live data; accepted forms of data are images, audio, and body poses or postures. If you can’t supply live data through your webcam or microphone, you can upload existing files from your computer. Once the model has the data it needs, it trains itself and checks whether it can accurately classify the examples you’ve provided. Afterwards, you can export the model to try it out.
Teachable Machine is an amazing AI tool. Watching it work shows you how a basic machine learning model actually operates. If you want to get into AI and machine learning, check out the different tools built with Teachable Machine, then play around with the model and its code.
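As one example of that last step, Teachable Machine can export an image model in TensorFlow/Keras format, which you can then load in Python. The file names and the 224x224 input size below are the export's usual defaults, but treat them as assumptions and check your own download.

```python
# A sketch of using a Teachable Machine image model after choosing the
# TensorFlow/Keras export. The file names ('keras_model.h5', 'labels.txt')
# and the 224x224 input size are assumed export defaults; verify them
# against the files you actually downloaded.
import numpy as np
from tensorflow import keras
from PIL import Image

model = keras.models.load_model("keras_model.h5", compile=False)
with open("labels.txt") as f:
    labels = [line.strip() for line in f]

# Preprocess a test image the way the training tool did: resize to the
# model's input size and scale pixel values into the [-1, 1] range.
image = Image.open("test.jpg").convert("RGB").resize((224, 224))
batch = (np.asarray(image, dtype=np.float32) / 127.5 - 1.0)[np.newaxis, ...]

probs = model.predict(batch)[0]
print(labels[int(np.argmax(probs))], float(probs.max()))
```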
While these experiments are fairly basic and built in a way that makes you want to sink hours of free time into them, they also serve a greater purpose. With machine learning and artificial intelligence already making leaps and bounds, human interaction is a much-needed ingredient in making these machines legitimately intelligent. So why not help usher in the new masters of humanity by trying these AI experiments out?