There are a huge number of languages used by programmers today - some people may wonder "why do we need so many?" CPUs ultimately understand only one kind of language - machine code - but writing machine code is tedious, unreadable by anyone else, and highly error-prone. Different machine code must be written for different CPUs - the languages we use on a daily basis bridge the gap between machine code and humans. Some languages, such as Assembly Language or C, are low-level, meaning they are closer to machine code than others. Generally speaking, the closer to machine code a language is, the more you need to know about the hardware. These low-level languages give you more control over the CPU, although modern compilers and runtimes have narrowed the performance gap for high-level languages.
Programming languages often borrow from one another and add newer features. Some are functional, some are object-oriented, and some are procedural. Some support variations of the three. Some are dynamically typed, while some are statically typed. Some support concurrency and multithreading, and some don't. Some, like Erlang, take the concept of concurrency and implement it with lightweight processes rather than operating-system threads. To put it succinctly, the reason we have so many languages is that they all have their own use cases, and it is up to us to decide which language will best suit our needs.
Compiled v Interpreted Languages
Compiled languages use a compiler to convert the source code into machine code - that way, the code is packaged up as machine code and can be sent to a target CPU. The compiled file is called an executable. Pros are that the source code remains private, it often runs faster because the code has been pre-converted and optimized for a specific target CPU, and it is ready to run as soon as the target machine gets it. Cons are that it isn't cross-platform (it's compiled for a certain CPU and operating system), it's less flexible (any change requires recompiling), and compilation is an extra step.
Interpreted languages don’t use a compiler - source code is sent to the target machine, where an interpreter converts it to machine code on the fly, processing it line by line. It doesn’t save the result as a separate file (like an executable). Pros are that they’re cross-platform, easier to test (since you directly run the source code without a compilation step), and easier to debug since you have access to the source code. Cons are that an interpreter is required, execution is often slower since the code isn’t precompiled, and the source code is public.
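For contrast, here is the interpreted workflow, assuming a Python 3 interpreter is available as `python3` (the file name `script.py` is a throwaway name for the sketch):

```shell
# Write a one-line Python script (script.py is a throwaway name for this sketch).
echo 'print("hello from an interpreted program")' > script.py

# No separate compile step: the interpreter reads the source file directly.
python3 script.py
```

The same `script.py` runs unchanged on any machine with a Python interpreter, but every such machine needs the interpreter installed - the pro and con described above.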
Both can be used - we can compile source code to an “intermediate language”, which converts it toward machine code as far as we can take it while maintaining its ability to be cross-platform. We then send this to target machines, and those machines finish the compilation at run time (known as JIT, or just-in-time, compilation) - the intermediate language used is also referred to as “byte code”.
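Java and C# are the classic JIT examples, but Python's cached byte code makes the intermediate form easy to see at the terminal (CPython interprets its byte code rather than JIT-compiling it, so this illustrates only the "byte code" half of the story). This assumes `python3` is available; `module_example.py` is a throwaway name.

```shell
# Write a tiny module (module_example.py is a throwaway name for this sketch).
echo 'print("hi")' > module_example.py

# Ask Python to compile the source to byte code without running it.
python3 -m py_compile module_example.py

# The byte code is cached as a .pyc file - an intermediate form that is
# neither human-readable source nor CPU-specific machine code.
ls __pycache__/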
Scripting v non-Scripting Languages
Scripting languages are used alongside non-scripting languages in large systems - because they're largely interpreted, they are useful for coding small parts of the system that may be subject to frequent change. Those parts can be optimized for flexibility since they don't need to be recompiled after changes are made. Parts of a system that are optimized for speed of execution are often written using compiled (non-scripting, in this case) languages.
Scripting languages are often used for rapid prototyping (they are faster to code with since they tend to be high-level and are flexible), data wrangling and general experimentation.
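A small data-wrangling sketch in shell shows why scripting languages suit this kind of task - one or two lines do the whole job, with no build step. The file `scores.csv` and its contents are made up for this example.

```shell
# A throwaway data file for this sketch.
printf 'alice,42\nbob,17\ncarol,99\n' > scores.csv

# One line of shell wrangles the data: keep rows whose second field
# (the score) exceeds 40, then count the matches.
awk -F, '$2 > 40' scores.csv | wc -l
```

Rewriting, rerunning, and tweaking this takes seconds, which is exactly the rapid-prototyping strength described above.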
A familiar example of a scripting language that is also an interpreted language is the command line - the commands we type are written in a shell scripting language, which is then interpreted, line by line, by a shell interpreter (bash, in the case of a standard Mac terminal; newer Macs default to zsh).
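The line-by-line interpretation is easy to see: save shell commands to a file and hand the file to bash, which executes each line in turn without producing any executable. The file name `greet.sh` is a throwaway name for this sketch.

```shell
# Save a two-line shell script (greet.sh is a throwaway name for this sketch).
cat > greet.sh <<'EOF'
name="world"
echo "hello, $name"
EOF

# bash reads and executes the script line by line - no compiled file is made.
bash greet.sh
```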