Got some grainy footage to enhance, or a miracle drug you need to discover? No matter the task, the answer is increasingly likely to be AI in the form of a transformer network.
Transformers, as practitioners call the networks for short, were invented at Google Brain in 2017 and are widely used in natural language processing (NLP). Now, though, they are spreading to almost all other AI applications, from computer vision to the biological sciences.
Transformers are extremely good at finding relationships in unstructured, unlabeled data. They are also good at generating new data. But to generate data effectively, transformer models often must grow to extreme proportions. Training the language model GPT-3, with its 175 billion parameters, is estimated to have cost between $11 million and $28 million. That’s to train one network, one time. And transformer size shows no sign of plateauing.
Transformer networks broaden their view
What makes transformers so effective at such a wide range of tasks?
Ian Buck, general manager and VP of accelerated computing at Nvidia, explained to EE Times that, while earlier convolutional networks might look at neighboring pixels in an image to find correlations, transformer networks use a mechanism called “attention” to look at pixels further away from each other.
“Attention focuses on remote connections: It’s not designed to look at what neighbors are doing but to identify distant connections and prioritize those,” he said. “The reason [transformers] are so good at language is because language is full of context that isn’t about the previous word but [dependent] on something that was said earlier in the sentence—or putting that sentence in the context of the whole paragraph.”
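The idea is easier to see in a few lines of code. Below is a minimal sketch of scaled dot-product attention, the core operation of a transformer; the NumPy implementation, variable names, and dimensions are illustrative assumptions rather than any particular library’s API.

```python
# Minimal sketch of scaled dot-product attention (illustrative, not a
# production implementation). Every position scores its relevance against
# every other position, however far apart they are in the sequence.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays of queries, keys, and values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (seq_len, seq_len) pairwise scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over all positions
    return weights @ V                              # each output mixes values from the whole sequence

# Toy self-attention over 5 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (5, 8)
```

The (seq_len, seq_len) score matrix connects every token directly to every other token in a single step, which is what lets attention “identify distant connections and prioritize those.”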
For images, this means transformers can be used to contextualize pixels or groups of pixels. In other words, a transformer can look for features of similar size, shape, or color elsewhere in the image to better understand the image as a whole.
“Convolutions are great, but you often had to build very deep neural networks to construct these remote relationships,” Buck said. “Transformers shorten that, so they can do it more intelligently, with fewer layers.”
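A vision transformer makes this concrete by cutting the image into patches and treating each patch as a token, so a single attention layer can relate any patch to any other directly. The sketch below shows just the patch-tokenizing step; the patch size and image dimensions are illustrative assumptions, not those of a specific model.

```python
# Minimal sketch of the patch-tokenizing step in a vision transformer
# (patch size and image dimensions are illustrative assumptions).
import numpy as np

def image_to_patch_tokens(image, patch=16):
    """image: (H, W, C) array; returns (num_patches, patch*patch*C) tokens."""
    H, W, C = image.shape
    rows, cols = H // patch, W // patch
    tokens = []
    for r in range(rows):
        for c in range(cols):
            block = image[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch, :]
            tokens.append(block.reshape(-1))   # flatten the patch into one token
    return np.stack(tokens)

img = np.zeros((224, 224, 3))
print(image_to_patch_tokens(img).shape)  # (196, 768): 14x14 patches, 16*16*3 values each
```

One attention layer over these 196 tokens can relate the top-left patch directly to the bottom-right one, whereas a convolutional network needs many stacked layers before its receptive field spans the whole image; that is the shortening Buck describes.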
More remote connections, bigger networks
The more remote the connections a transformer considers, the bigger it gets, and this trend doesn’t seem to have an end in sight. Buck referred to language models considering words in a sentence, then sentences in a paragraph, then paragraphs in a document, then documents across a corpus of the internet.
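One back-of-the-envelope way to see why: self-attention compares every token with every other, so the work grows quadratically with the length of the context. The figures below are simple arithmetic, not measurements from any specific model.

```python
# Back-of-the-envelope arithmetic: self-attention builds a score for every
# pair of tokens, so cost grows quadratically with context length.
for seq_len in (512, 2048, 8192, 32768):
    pairwise = seq_len ** 2  # entries in the attention score matrix
    print(f"{seq_len:>6} tokens -> {pairwise:>13,} pairwise scores")
```

Going from 512 to 32,768 tokens is 64x the context but 4,096x the pairwise scores, which is one reason model sizes and training costs keep climbing as context windows expand.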