A flow diagram represents the transformer architecture. On the left (encoder), the input x passes through an embedding layer and a positional embedding, then multi-head self-attention (MSA) followed by Add & LayerNorm, and an MLP followed by Add & LayerNorm; the encoder output feeds the decoder's multi-head cross-attention (MCA). On the right (decoder), the output y passes through an embedding layer and a positional embedding, then masked multi-head self-attention (MMSA) with Add & LayerNorm, MCA with Add & LayerNorm, and an MLP with Add & LayerNorm, followed by a linear layer and a softmax producing the output probabilities.

Fig. 3

The transformer architecture. It consists of an encoder (left) and a decoder (right) block, each consisting of a series of attention blocks (multi-head and masked multi-head attention) and MLP layers. Next to each element, we denote its dimensionality. Figure inspired by [4].
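To make the data flow in Fig. 3 concrete, below is a minimal PyTorch sketch of one encoder block and one decoder block. The class names (EncoderBlock, DecoderBlock) and hyperparameters (d_model, n_heads, d_mlp) are illustrative assumptions, not taken from the chapter; only the block structure (MSA/MMSA/MCA, each followed by Add & LayerNorm, plus an MLP) follows the figure.

import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    # One encoder block from the left of Fig. 3: MSA -> Add & LN -> MLP -> Add & LN.
    def __init__(self, d_model: int = 512, n_heads: int = 8, d_mlp: int = 2048):
        super().__init__()
        self.msa = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_mlp), nn.ReLU(), nn.Linear(d_mlp, d_model)
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Multi-head self-attention with a residual connection, then LayerNorm.
        x = self.ln1(x + self.msa(x, x, x, need_weights=False)[0])
        # MLP with a residual connection, then LayerNorm.
        return self.ln2(x + self.mlp(x))

class DecoderBlock(nn.Module):
    # One decoder block from the right of Fig. 3: MMSA -> Add & LN -> MCA -> Add & LN -> MLP -> Add & LN.
    def __init__(self, d_model: int = 512, n_heads: int = 8, d_mlp: int = 2048):
        super().__init__()
        self.mmsa = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mca = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_mlp), nn.ReLU(), nn.Linear(d_mlp, d_model)
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.ln3 = nn.LayerNorm(d_model)

    def forward(self, y, enc_out):
        # Masked multi-head self-attention: a causal mask hides future positions.
        t = y.size(1)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        y = self.ln1(y + self.mmsa(y, y, y, attn_mask=causal, need_weights=False)[0])
        # Multi-head cross-attention: queries come from the decoder,
        # keys and values from the encoder output (the arrow across Fig. 3).
        y = self.ln2(y + self.mca(y, enc_out, enc_out, need_weights=False)[0])
        # MLP with a residual connection, then LayerNorm.
        return self.ln3(y + self.mlp(y))

Stacking several such blocks, and appending the final linear layer and softmax, yields the full output-probability head shown at the top right of the figure.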

From: Chapter 6, Transformers and Visual Transformers

Colliot O, editor. Machine Learning for Brain Disorders [Internet]. New York, NY: Humana; 2023.
Copyright 2023, The Author(s)

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
