The company my partner and I run provides writing, editing, layout, general marketing, and other services to all kinds of companies, but most of our clients are in high tech, including several clients in the supercomputing industry. Whenever I bring in a new editor, booth helper, or anyone else, I send this mini-glossary of industry terms to help them come up to speed.

Especially at a trade show, these terms get tossed about constantly, so if you’re newly stepping into this industry, it’s good to be at least passably familiar with them to avoid feeling permanently confused.

accelerator devices. Hardware components designed to perform certain functions faster than would be possible with software running on a CPU. Some devices, such as GPUs (graphics processing units), can be programmed to perform many kinds of functions. Other accelerator devices such as FPGAs (field-programmable gate arrays), DSPs (digital signal processors), and ASICs (application-specific integrated circuits) are often designed to perform one specific function.

API. Application programming interface (or, to oversimplify, a programming language). This is the interface between the software that a developer writes and a software library. The API defines how the programmer must write code that accesses functions and data in the library.
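As an illustrative sketch (my example, not from the glossary), here is the idea in Python: the standard library’s `json` module documents an API, so a program that follows it can use the library without knowing anything about its internals.

```python
import json

# The json module's documented API specifies the function name (dumps),
# its parameters (here, sort_keys), and what it returns (a str).
record = {"vendor": "ExampleChip", "cores": 64}  # hypothetical data
text = json.dumps(record, sort_keys=True)
print(text)  # a JSON-formatted string
```

Any library written to the same API could be swapped in, which is exactly why APIs matter so much in the open-standard discussion below.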

CPU. Central processing unit. The main processor chip, plus some of the supporting electronics that make it work.

discrete. This refers to a component or device that is separate from another component or device. “Discrete” is in contrast to “integrated.” For example, some computers use CPU processors with graphics capability integrated inside the CPU chip, while other computers implement graphics using a separate (discrete) GPU chip.

embedded systems. This is an important market that has been around for ages but is rapidly growing. These are small computers that are embedded into other things. You’ll find them in manufacturing (and other) robots, cars, medical devices, handheld devices, and probably in modern refrigerators. The difference between a mere sensor (collects data and sends it on) and an embedded system is that the embedded system has compute capabilities to process the data then and there.

GPU. Graphics processing unit. This may be used inside a computer to drive the display device, and in many cases may be used to perform non-graphical numerical computations faster than the CPU (see accelerator devices).

heterogeneous. Describes a system that mixes CPUs and GPUs, or more pedantically, mixes different kinds of CPUs and accelerators of any kind. The term often comes up alongside shared memory, because heterogeneous code must access memory on both the CPU and the GPU.

HPC. High-performance computing, also called supercomputing.

modern processors. This is not a special term, but I include it here because too many marketing folks use it to denote the manycore-chip processors of today as opposed to the single-core processors of years gone by. As I said: not a special term. Don’t use it.

open source. Refers to a culture and the principles of decentralized development where the design of a product is publicly accessible and can be examined, modified, or extended by anybody. Contrast this with the similar open standard, below.

open standard. An open standard is jointly defined and maintained by a group of engineers from across the industry: hardware and software vendors, major universities, and more. They develop the standard to run well across their own platforms, yet the cooperative nature of this development prevents any one party from developing the standard to their own sole benefit. This means all those involved, and the industry at large, benefit. But open standards are not strictly open source (unless they are explicitly set up to be), because only members of the controlling organizations are allowed to participate in their development.

parallel application. Computers used to have a single processor, so programs ran serially: perform one task, then the next, then the next, and so on. Modern computers may have multiple processors, and each processor may have many cores, but a program must be written to run in parallel in order to take advantage of them. In a parallel application, instead of running, say, 50 tasks one at a time serially, the program runs all 50 tasks at the same time in parallel.
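A toy Python sketch of the serial-versus-parallel idea (my example; real HPC applications typically use frameworks such as MPI or OpenMP rather than Python threads):

```python
from concurrent.futures import ThreadPoolExecutor

def task(n):
    # Stand-in for real work; each task is independent of the others.
    return n * n

numbers = range(50)

# Serial: run one task at a time, in order.
serial = [task(n) for n in numbers]

# Parallel: a pool of worker threads runs tasks concurrently.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(task, numbers))

# Same answers either way; only the execution strategy differs.
assert serial == parallel
```

The key point is that the 50 tasks had to be independent of one another for this to work, which is precisely what “written to run in parallel” means.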

platform. A generic term for the hardware and software environment in which software runs. Depending on the context, it may include the operating system, processor(s), or other underlying software or hardware. Examples of platforms: Linux, Intel Scalable System Framework, HPCC (High-Performance Computing Cluster), IBM Platform HPC.

portable. One of the two key features of code written to an open-standard API is that it is portable: you write your code for one platform, your lab upgrades to a different platform, and with only small tweaks your code runs on the new platform because you wrote it using an open standard. If you had written the program using a proprietary API, it would need to be almost entirely rewritten when the platform changed. Portability is never absolute; it’s a slippery, relative thing, and you always have it to some degree. Programmers and customers want maximum portability so that their software will run anywhere. But hardware vendors who supply software are often happy if their software runs only on their own hardware, since that locks customers into their brand of hardware. It’s a constant technical, business, and political balancing act (and the cause of some real acrimony).

proprietary. A proprietary API is one that is maintained by a vendor for running on that vendor’s platform. For example, CUDA is the proprietary API from NVIDIA for use on NVIDIA systems.

scalable model. After portability, scalability is the second key aspect of writing software. It’s easy for a programmer to write software that runs fast on small problems, but unless it is scalable, it will fail on large problems. As computing problems get ever larger (big data), scalability is a top concern.
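To make that concrete, here is an illustrative Python sketch (mine, not from the glossary) of two ways to solve the same small problem, one of which scales and one of which does not:

```python
def has_dupes_quadratic(items):
    # Compares every pair of elements: work grows with the square of
    # the input size. Fine for small problems, hopeless for large ones.
    return any(a == b
               for i, a in enumerate(items)
               for b in items[i + 1:])

def has_dupes_linear(items):
    # One pass using a set: work grows roughly linearly with the input,
    # so it keeps working as the problem gets bigger.
    return len(set(items)) != len(items)

data = [3, 1, 4, 1, 5]
assert has_dupes_quadratic(data) == has_dupes_linear(data)
```

Both functions are “fast enough” on five elements; only the second remains usable on five billion. That gap is what scalability is about.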

shared memory. Memory inside a computer that can be accessed simultaneously (or virtually so) by multiple programs or components. For example, a supercomputer may have fast memory available privately to each CPU, and slower memory on a shared bus that all CPUs can access.
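A minimal Python sketch of the shared-memory idea (my example; this shows threads within a single process sharing one data structure, with a lock guarding simultaneous access):

```python
import threading

counter = {"value": 0}      # memory shared by all threads in this process
lock = threading.Lock()     # guards simultaneous access to the shared data

def worker():
    for _ in range(1000):
        with lock:          # only one thread may update at a time
            counter["value"] += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])  # 4 threads x 1000 increments = 4000
```

Without the lock, threads could interleave their updates and lose increments, which is why shared-memory programming is as much about coordination as about access.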
