System Calls
Library calls such as printf or scanf are resolved within user-space libraries (and may invoke system calls internally). A system call is a mechanism that provides the interface between a process and the operating system.
- System calls are typically written in a high-level language (C, C++) and are accessed by programs through an Application Programming Interface (API) rather than by direct system-call invocation
- Common APIs include the Win32 API for Windows and the POSIX API for POSIX-based systems (Linux and macOS)
- System calls provide a way for an application program to request low-level services from the operating system, such as reading and writing files, creating processes, and allocating memory.
- The system calls are typically the most basic functions provided by the operating system, and higher-level libraries and APIs are built on top of them.

→ Sequence of system calls to copy the contents of one file to another
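A minimal sketch of that sequence on a POSIX system (the file names and buffer size are illustrative): open the source and the destination, loop over read/write, then close, with each step trapping into the kernel.

```c
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    ssize_t n;

    int in  = open("input.txt", O_RDONLY);              /* system call: open source file */
    int out = open("output.txt",
                   O_WRONLY | O_CREAT | O_TRUNC, 0644); /* open/create destination */
    if (in < 0 || out < 0)
        return 1;                                       /* abnormal termination */

    while ((n = read(in, buf, sizeof buf)) > 0)         /* read a block from the source */
        write(out, buf, (size_t)n);                     /* write the block to the destination */

    close(in);                                          /* release both files */
    close(out);
    return 0;                                           /* normal termination */
}
```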
Device Status Table
The device-status table contains an entry for each I/O device indicating its type, address, and state.
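A sketch of what one entry might look like; the field and type names here are hypothetical, not taken from any real kernel.

```c
#define MAX_DEVICES 16

enum dev_state { DEV_IDLE, DEV_BUSY, DEV_ERROR };

struct io_request {
    struct io_request *next;    /* queue of pending requests */
    /* ... request parameters (buffer, length, ...) ... */
};

struct device_entry {
    int                type;    /* device class: disk, keyboard, printer, ... */
    unsigned long      address; /* device address / port */
    enum dev_state     state;   /* current device state */
    struct io_request *queue;   /* requests waiting on this device */
};

struct device_entry device_status_table[MAX_DEVICES];
```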
Direct Memory Access Structure
- Direct Memory Access (DMA) is a capability provided by some computer bus architectures that allows data to be sent directly from an attached device (such as a disk drive) to the memory on the computer's motherboard.
- The device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention
- Only one interrupt is generated per block, rather than one interrupt per byte
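A hedged sketch of how a driver might program one such block transfer through memory-mapped controller registers; the register layout below (DMA_SRC, DMA_DST, DMA_LEN, DMA_CTRL) is entirely hypothetical.

```c
#include <stdint.h>

#define DMA_BASE 0x40001000u
#define REG(off) (*(volatile uint32_t *)(uintptr_t)(DMA_BASE + (off)))
#define DMA_SRC  REG(0x0)    /* device buffer address           */
#define DMA_DST  REG(0x4)    /* main-memory destination address */
#define DMA_LEN  REG(0x8)    /* block length in bytes           */
#define DMA_CTRL REG(0xC)    /* bit 0: start transfer           */

void dma_copy_block(uint32_t dev_buf, uint32_t mem_dst, uint32_t len) {
    DMA_SRC  = dev_buf;
    DMA_DST  = mem_dst;
    DMA_LEN  = len;
    DMA_CTRL = 1;   /* start; the CPU is free to do other work, and the
                       controller raises one interrupt when the whole
                       block has been transferred */
}
```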



Cache
- Cache is a high-speed data storage layer which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data's primary storage location.
- Caching means copying information into a faster storage system; main memory itself can be viewed as a cache for secondary storage
<aside>
💡 There are different algorithms to determine which data to put in cache
</aside>
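One classic replacement policy is LRU (least recently used): on a miss, evict the entry whose last access is oldest. A minimal, purely illustrative sketch with a fixed-size array and logical timestamps:

```c
#define CACHE_SLOTS 4

struct slot { int key; int value; unsigned long last_used; int valid; };

static struct slot cache[CACHE_SLOTS];
static unsigned long logical_clock;

/* Returns the slot holding key (a hit, recency refreshed), or the
   least-recently-used slot for the caller to refill (a miss). */
static struct slot *cache_lookup(int key) {
    struct slot *victim = &cache[0];
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].valid && cache[i].key == key) {
            cache[i].last_used = ++logical_clock;   /* refresh recency */
            return &cache[i];
        }
        if (!cache[i].valid || cache[i].last_used < victim->last_used)
            victim = &cache[i];                     /* prefer empty, then oldest */
    }
    return victim;   /* caller fills this slot from primary storage */
}
```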
Parallel Computing
- Parallel computing is a type of computing where multiple processors work together to solve a problem.
- It is designed to take advantage of multiple processors, cores, or computers to increase the speed of processing and to solve problems faster.
- In parallel computing, the problem is divided into smaller tasks that can be executed simultaneously by different processors, reducing the time it takes to solve the problem as a whole.
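A minimal sketch of this divide-and-compute idea using POSIX threads (the array contents and thread count are arbitrary): each thread sums one chunk, and the partial sums are combined at the end.

```c
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static int  data[N];
static long partial[NTHREADS];

static void *sum_chunk(void *arg) {
    long id = (long)arg;                  /* which chunk this thread owns */
    long chunk = N / NTHREADS;
    for (long i = id * chunk; i < (id + 1) * chunk; i++)
        partial[id] += data[i];
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (long i = 0; i < N; i++) data[i] = 1;

    for (long i = 0; i < NTHREADS; i++)   /* chunks run in parallel */
        pthread_create(&t[i], NULL, sum_chunk, (void *)i);

    long total = 0;
    for (long i = 0; i < NTHREADS; i++) { /* wait, then combine partial sums */
        pthread_join(t[i], NULL);
        total += partial[i];
    }
    printf("total = %ld\n", total);       /* prints 1000000 */
    return 0;
}
```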
Distributed Computing
- Distributed computing is a type of computing where multiple computers are connected together to solve a single problem.
- It involves the sharing of computing resources, data, and storage across multiple computers, enabling them to work together to solve a problem.
- In distributed computing, multiple computers can work on the same task in parallel, reducing the time it takes to complete the task.
- It is used in various applications, such as cloud computing, big data, and scientific computing.
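As a loose, single-machine stand-in for the idea (no real network involved), the sketch below uses child processes connected by pipes in place of networked computers: each "node" computes its share of the work and sends the result back over a message channel, and the coordinator combines them.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

#define NODES 3

int main(void) {
    int ch[NODES][2];                      /* one message channel per "node" */
    for (int i = 0; i < NODES; i++) {
        pipe(ch[i]);
        if (fork() == 0) {                 /* "node" i */
            long result = (i + 1) * 100L;  /* stand-in for the node's share of work */
            write(ch[i][1], &result, sizeof result);
            _exit(0);
        }
    }
    long total = 0, part;
    for (int i = 0; i < NODES; i++) {      /* coordinator gathers the results */
        read(ch[i][0], &part, sizeof part);
        total += part;
        wait(NULL);
    }
    printf("combined result = %ld\n", total);   /* prints 600 */
    return 0;
}
```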

Reliable Computing
Running the same program on several processors and comparing the results gives a more reliable outcome, e.g., computing coordinates for a satellite launch, where a single erroneous result would be costly.
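A sketch of the idea as triple redundancy with a majority vote; the computation here is a placeholder, and in a real system each of the three calls would run on a separate processor.

```c
#include <stdio.h>

/* Stand-in for the real computation (e.g., a launch coordinate). */
static double compute_coordinate(void) {
    return 42.0;
}

/* Majority vote over three redundant results. */
static double vote(double a, double b, double c) {
    if (a == b || a == c) return a;
    if (b == c)           return b;
    return a;             /* no majority: a real system would raise an error */
}

int main(void) {
    /* In a real system each call would run on a different processor. */
    double r = vote(compute_coordinate(),
                    compute_coordinate(),
                    compute_coordinate());
    printf("agreed result = %f\n", r);
    return 0;
}
```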
Multiprocessing
A type of computing where multiple processors work together to solve a problem. It involves multiple CPUs or cores working together to perform tasks, with the goal of increasing processing power and improving system performance.

Symmetric Multiprocessing
- All processors have equal access to all system resources and work together to share the workload.
- Each processor can run any task at any time
- The operating system manages the distribution of tasks among the processors.
- Shared memory
Asymmetric Multiprocessing
- One processor is designated as the master and is responsible for managing the system, while the other processors are slaves and only perform tasks assigned by the master.
- The master processor has direct access to all system resources and communicates with the slave processors to coordinate the distribution of tasks.
- No shared memory
<aside>
💡 In general, SMP provides a more flexible and scalable solution for multiprocessing, while AMP is used in specialized systems where one processor needs to have more control over the system.
</aside>
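The contrast can be illustrated on Linux with CPU affinity (a GNU extension, so this is one concrete illustration rather than a portable API): an unpinned thread may be scheduled on any core, SMP style, while a pinned thread is restricted to one designated core, AMP style.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *task(void *arg) {
    printf("%s running on CPU %d\n", (const char *)arg, sched_getcpu());
    return NULL;
}

int main(void) {
    pthread_t smp_thread, amp_thread;

    /* SMP style: no affinity; the scheduler may pick any core. */
    pthread_create(&smp_thread, NULL, task, "SMP-style thread");

    /* AMP style: restrict the thread to CPU 0 before it starts. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof set, &set);
    pthread_create(&amp_thread, &attr, task, "AMP-style thread");

    pthread_join(smp_thread, NULL);
    pthread_join(amp_thread, NULL);
    return 0;
}
```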


A Dual-Core Design
- A dual-core system is a computer that has two processing cores integrated into a single physical processor.
- Each core is a separate processing unit that can execute instructions independently, allowing for multiple tasks to be processed simultaneously.
- In this mode of operation, the two cores can simultaneously process different portions of the same program
- A shared bus connects all the cores to memory, so every core accesses the same main memory
- Cores on the same chip run off a common clock, so their relative timing is uniform; across multiple devices, clocks and timing differ
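On a POSIX system you can ask how many cores the OS sees; _SC_NPROCESSORS_ONLN is a common extension (supported by glibc and others) rather than a guaranteed constant. On a dual-core machine this typically prints 2:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long cores = sysconf(_SC_NPROCESSORS_ONLN);   /* cores currently online */
    printf("online cores: %ld\n", cores);
    return 0;
}
```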

Advantages of Dual-Core Systems
- Increased Performance
- Better Multitasking
- Improved Power Efficiency
- Improved Price-Performance Ratio

Disadvantages of Dual-Core Systems:
- Limited Compatibility
- Higher Complexity
- Increased Cost (Relatively expensive)
- Limited Scalability