LAWS OF COMPUTER SCIENCE

Nkugwa Mark William
Dec 31, 2022


The laws of computer science are a set of principles and concepts that form the foundation of the field and inform the design and use of computer systems. They are often used as guiding principles when designing algorithms, programming languages, and computer systems, and they can help us understand the limitations and capabilities of those systems.

The Law of Unintended Consequences: This law states that the actions of a complex system can have unintended consequences that are difficult to predict or control. In the context of computer science, this means that even well-designed systems can behave in unforeseen ways when they are used in the real world.
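
A small illustration of unintended consequences at the level of everyday code, a minimal sketch assuming nothing beyond the Python standard library: Python evaluates default arguments once, at definition time, so a function that looks harmless can silently share state between calls.

```python
def append_item(item, items=[]):
    # The default list is created once, when the function is defined,
    # and then shared by every call that relies on the default.
    items.append(item)
    return items

print(append_item("a"))  # ['a']       as expected
print(append_item("b"))  # ['a', 'b']  unintended: state leaked between calls

# The conventional fix: use None as a sentinel and build a fresh list.
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_fixed("a"))  # ['a']
print(append_item_fixed("b"))  # ['b']  each call is now independent
```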

  1. The Law of Requisite Variety: This law, known in cybernetics as Ashby's law, states that a controller can regulate a system only if it has at least as much variety (as many distinct responses) as the system has states. In other words, the more states a complex system can be in, the harder it is to control and predict its behavior.
  2. The Law of Simplicity: This law states that the most effective and efficient systems are those that are simple in design and operation. By keeping designs simple, we minimize errors and make systems easier to understand and maintain (see the simplicity sketch after this list).
  3. The Law of Imperfection: This law states that all systems, including computer systems, are imperfect and will always carry some level of error or uncertainty. We must therefore be prepared to handle errors and uncertainty, and design systems that are resilient in the face of these challenges (see the retry sketch after this list).
  4. The Law of Conservation of Complexity: This law, sometimes called Tesler's Law, states that the complexity of a system cannot be reduced indefinitely. There is a limit to how simple a system can be made, and complexity removed from one part of the system will inevitably reappear elsewhere.
  5. The Law of the Fast Algorithm: This law states that raw speed is not the only measure of an algorithm's worth: what matters is how much useful work it can complete within a given time and resource budget. An algorithm that is fast but consumes a lot of resources (such as memory or processing power) may be less effective in practice than a slower algorithm that uses those resources more efficiently (see the duplicate-detection sketch after this list).
  6. The Law of the Iterative Algorithm: This law states that a well-designed iterative algorithm (one that repeats a set of steps multiple times) converges on a solution, given enough time and resources, although convergence is not guaranteed for every iterative process. The rate of convergence (the speed at which the algorithm approaches a solution) can also vary greatly depending on the specific algorithm and the problem being solved (see the Newton's method sketch after this list).
  7. The Law of the Monolithic Algorithm: This law states that monolithic algorithms (those that are large and complex, with many interconnected parts) are generally less efficient and less flexible than modular algorithms (those composed of smaller, self-contained units). Monolithic algorithms are harder to maintain and modify, and more prone to errors and bugs (see the refactoring sketch after this list).
  8. The Law of the Minimal Surface: This law states that the surface of an algorithm that is visible to the user should be as simple and intuitive as possible. The interface and user experience should be designed to be straightforward and easy to understand, while still providing all of the necessary functionality (see the interface sketch after this list).
  9. The Law of the Minimal Invocation: This law states that the number of times an algorithm is invoked (called) should be minimized, as each invocation carries a cost in time and resources. By reducing the number of invocations, we can improve the efficiency and performance of the system (see the caching sketch after this list).
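
The sketches below illustrate several of these laws. First, the Law of Simplicity, using a hypothetical task of summing the even numbers in a list: both versions are correct, but the simple one is easier to verify, test, and maintain.

```python
# A needlessly clever version: correct, but the intent is buried.
def sum_evens_clever(numbers):
    return sum(map(lambda n: n * (1 - n % 2), numbers))

# A simple version: the intent is obvious at a glance.
def sum_evens_simple(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total

assert sum_evens_clever([1, 2, 3, 4]) == sum_evens_simple([1, 2, 3, 4]) == 6
```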
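
Next, the Law of Imperfection: because failures are inevitable, resilient code anticipates them. This is a minimal sketch in which the hypothetical flaky_operation stands in for an unreliable network call; it is wrapped in a retry loop with exponential backoff.

```python
import random
import time

def flaky_operation():
    # Hypothetical stand-in for an unreliable call that sometimes fails.
    if random.random() < 0.5:
        raise ConnectionError("transient failure")
    return "ok"

def call_with_retries(operation, attempts=3, base_delay=0.1):
    # Retry on failure, doubling the delay between attempts, and
    # re-raise only once the final attempt has also failed.
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

try:
    print(call_with_retries(flaky_operation))
except ConnectionError:
    print("gave up after repeated failures")
```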
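
For the Law of the Fast Algorithm, a sketch of the underlying trade-off using the hypothetical task of detecting duplicates in a list: the fast version buys speed with extra memory, while the lean version is slower but uses constant extra space. On a memory-constrained machine, the slower one may be the more effective choice.

```python
def has_duplicate_fast(values):
    # O(n) time, O(n) extra memory: trades space for speed.
    seen = set()
    for v in values:
        if v in seen:
            return True
        seen.add(v)
    return False

def has_duplicate_lean(values):
    # O(n^2) time, O(1) extra memory: slower, but frugal with space.
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False

sample = [1, 2, 3, 2]
assert has_duplicate_fast(sample) and has_duplicate_lean(sample)
```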
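
For the Law of the Iterative Algorithm, Newton's method for square roots is one of the classic cases where iteration provably converges: each pass refines the guess until successive estimates agree to within a tolerance. The iteration budget matters, since not every iterative process converges; this is a sketch, not production numerics.

```python
def newton_sqrt(x, tolerance=1e-10, max_iterations=100):
    # Approximate sqrt(x) for x > 0 by repeatedly averaging the
    # guess with x / guess until the estimate stops moving.
    guess = x if x > 1 else 1.0
    for _ in range(max_iterations):
        next_guess = 0.5 * (guess + x / guess)
        if abs(next_guess - guess) < tolerance:
            return next_guess  # converged
        guess = next_guess
    return guess  # iteration budget exhausted; best effort

print(newton_sqrt(2))  # about 1.4142135623730951
```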
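
For the Law of the Monolithic Algorithm, a before-and-after refactoring sketch on hypothetical comma-separated name,age records: the monolithic version tangles parsing, filtering, and formatting together, while the modular version lets each step be tested, reused, or replaced independently.

```python
# Monolithic: parsing, filtering, and formatting are tangled together,
# so no step can be tested or reused in isolation.
def report_monolithic(raw):
    rows = [line.split(",") for line in raw.strip().split("\n")]
    adults = [r for r in rows if int(r[1]) >= 18]
    return "\n".join(f"{name}: {age}" for name, age in adults)

# Modular: three small, self-contained steps.
def parse_rows(raw):
    return [line.split(",") for line in raw.strip().split("\n")]

def adults_only(rows):
    return [r for r in rows if int(r[1]) >= 18]

def format_report(rows):
    return "\n".join(f"{name}: {age}" for name, age in rows)

data = "Alice,34\nBob,12\nCarol,19"
assert report_monolithic(data) == format_report(adults_only(parse_rows(data)))
```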
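
For the Law of the Minimal Surface, a hypothetical moving-average utility whose public surface is a single add() method: the caller never sees the window bookkeeping, only a simple, intuitive interface.

```python
class MovingAverage:
    # Public surface: one constructor, one method.
    # The underscore-prefixed fields are internal details.

    def __init__(self, window_size):
        self._window_size = window_size
        self._values = []

    def add(self, value):
        # Record a value and return the current windowed average.
        self._values.append(value)
        if len(self._values) > self._window_size:
            self._values.pop(0)
        return sum(self._values) / len(self._values)

avg = MovingAverage(window_size=3)
print(avg.add(1))  # 1.0
print(avg.add(2))  # 1.5
print(avg.add(3))  # 2.0
print(avg.add(4))  # 3.0, the oldest value has rolled off
```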
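
Finally, for the Law of the Minimal Invocation, a caching sketch built on Python's functools.lru_cache: repeated calls with the same argument are served from the cache, so the underlying work runs only once. Here expensive_lookup is a hypothetical stand-in for a costly operation such as a database query.

```python
from functools import lru_cache

call_count = 0

@lru_cache(maxsize=None)
def expensive_lookup(key):
    # Hypothetical stand-in for a costly call (database, API, ...).
    global call_count
    call_count += 1
    return key.upper()

for _ in range(1000):
    expensive_lookup("user-42")  # 999 of these are served from the cache

print(call_count)  # 1: the expensive work was invoked only once
```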


Written by Nkugwa Mark William

Nkugwa Mark William is a chemical and process engineer, entrepreneur, software engineer, and technologist with apps on the Google Play Store and e-commerce sites.
