One needs to be fluent in discrete mathematics in order to be able to work in fields like:
Data Science
Machine Learning
Software Engineering
and others...
The specialization covers [[discrete mathematics]] ideas and concepts needed in various branches of computer science. To make the learning process more efficient and enjoyable, we use the following active learning components.
Interactive puzzles provide you with a fun way to "invent" the key ideas on your own. The puzzles are mobile-friendly, so you can play with them anywhere. The goal of every puzzle is to give you a clean, easy-to-state problem where nothing distracts you from inventing a method for solving it. In turn, the corresponding method usually has a wide range of applications to various problems in practice.
Autograded quizzes allow you to check your understanding right after you learn a new concept or idea.
Code snippets are helpful in two ways:
they show you how ideas from discrete mathematics are used in programming, and
they serve as interactive examples: tweak the given piece of code, run it, and see what happens.
Programming challenges will help you to solidify your understanding. As Donald Knuth says, "I find that I don't understand things unless I try to program them."
A.1) Using Python
We will also use [[Python]] code snippets to understand the discrete math ideas with applications.
Why Python?
High-level language: the syntax is reader-friendly
Interactive mode: we can use the [[read-eval-print loop (REPL)]] for instant feedback from the machine
Libraries: we can generate a random sequence, or plot a graph, in just one line of code.
Disadvantage: being high-level makes Python less flexible for performance tuning.
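For instance, the "libraries" point can be seen in one line of standard-library Python (a tiny illustrative sketch):

```python
import random

# One line: a random sequence of 10 distinct numbers between 0 and 99
seq = random.sample(range(100), 10)
print(seq)
```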
The first part of this discrete math course is about [[proofs]] in general. But why do programmers need proofs? We are not mathematicians.
If you are writing a videogame or a search engine, you probably do not need to prove its correctness.
On the other hand, if you write an [[operating system]] or a [[cryptographic protocol]], you will probably need to prove that the [[operating system]] will not stop at some point, or prove that the [[cryptographic protocol]] is safe.
B.1) What is a proof?
A [[proof]] is an argument, a convincing argument.
To get a [[proof]] first we need to understand the problem.
The goal of this course is to: understand, invent, explain, and enjoy [[proofs]].
If you are given a puzzle, you should immediately stop and think about it before you look up the solution.
Did you know?
"God has a book that contains all the proofs."
—Paul Erdős
Paul Erdős was a renowned Hungarian mathematician known for his work in number theory, combinatorics, and many other fields, as well as for his prolific collaboration with other mathematicians. The reference to "God's book" is a famous metaphor he used to describe a hypothetical book in which the most elegant and beautiful mathematical proofs are written down.
Inspired by this concept, the publication "Proofs from THE BOOK" (by Martin Aigner and Günter M. Ziegler) presents a collection of elegant proofs, showcasing the beauty of mathematics that Erdős so revered.
B.2) Proof by Example
So, to understand proofs we will begin with an example.
Problem 1 "Can a chess board (8x8) be tiled (no overlaps or empty spaces) by domino tiles (1x2)?"
The Answer: YES! Let's prove it!
We can show an example of one solution as a proof,
This is known as a [[Proof by Example]].
Problem 2
Now, let's consider another problem: "Can a chessboard (8x8) with a missing square in one of its corners be tiled (no overlaps or empty spaces) by domino tiles (1x2)?"
Let's try to find an example:
Questions:
Did we find a tiling, and thus prove that a solution exists?
Ans: No
Did we prove that the solution does not exist?
Ans: No! We haven't tried all the possibilities.
Can we prove that the solution either exists or does not exist? Can we do either of these two proofs with examples?
Ans: No. With examples alone we cannot, but we will see the [[impossibility proof]] in the next section.
B.3) Impossibility Proof (part 1)
Let's go back to problem 2.
Problem 2 "Can a chessboard (8x8) with a missing square in one of its corners be tiled (no overlaps or empty spaces) by domino tiles (1x2)?"
The Answer: No. But why? Can we prove it?
↳ One cell will always remain uncovered, because the chessboard has an odd number of cells: 8 × 8 - 1 = 63.
So, we can use 31 tiles to cover 62 cells, because 31 × 2 = 62,
but 1 cell will always remain uncovered.
↳ We just proved that it is not possible. This is our [[impossibility proof]].
Problem 3
Now, let's consider another problem: "Can a chessboard (8x8) with 2 missing cells at opposite corners be tiled (no overlaps or empty spaces) by domino tiles (1x2)?"
Let's use our argument again, but with a new problem (will it work?).
Now we have an even number of cells: 8 × 8 - 2 = 62.
↳ This means we could use 31 tiles to cover the board, right?
Question:
Is this argument correct?
Ans: I don't know, let's find out with some examples
Example 1:
In this attempt we could not cover 2 cells.
Our domino tiles also cover 2 cells each, but the positions of the red cells are not appropriate for a domino tile. So even with an "even" number of cells, we failed to tile the chessboard.
Maybe we did something wrong, let's try again.
Example 2:
Same problem...
The 2 remaining cells cannot be covered by the shape of 1 domino tile.
Question:
Should we try more possibilities or is it impossible to solve? Ans: Let's find out in the next section.
B.4) Impossibility Proof (part 2 and conclusion)
Let's go back to the problem.
Problem 3 "Can a chessboard (8x8) with 2 missing cells at opposite corners be tiled (no overlaps or empty spaces) by domino tiles (1x2)?"
The Answer: No. But why? Can we prove it?
For our second [[impossibility proof]], we need a deeper understanding of the arrangement of the chessboard. Let's color the board as follows,
Now we can see that the 2 corners that we removed are both black.
This means we don't have the same number of black cells as white cells: 32 white and 30 black. But why is this important?
Let's see how the domino tiles cover the board,
One domino tile always covers 1 black cell and 1 white cell.
This means that in order to cover the board, not only do we need an even number of cells, but we also need the same number of black cells as white cells.
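We can verify this counting argument with a short Python snippet (the coloring convention below is an assumption for illustration; the argument only needs the two colors):

```python
# Count the two cell colors on an 8x8 board with opposite corners removed.
# Convention: (row + col) % 2 == 0 is black, so the removed corners
# (0,0) and (7,7) are both black, matching the argument above.
cells = {(r, c) for r in range(8) for c in range(8)} - {(0, 0), (7, 7)}
black = sum(1 for r, c in cells if (r + c) % 2 == 0)
white = len(cells) - black
print(black, white)  # 30 black vs 32 white: a domino covers one of each,
                     # so a perfect tiling is impossible
```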
B.5) Recap: Proofs and Theorems
So, in formal mathematics, we would have:
A [[theorem]]
and a [[proof]]
So from our problem 3 we could say the following,
Theorem
A chess board (8x8) without 2 opposite corners cannot be tiled by (1x2) dominos.
And we can also show the [[proof]] of our [[theorem]]
Proof:
A regular chessboard has 4 black and 4 white cells in each row, so we know the total number of black and white cells in a regular chessboard: 32 black and 32 white.
Opposite corners are always of the same color. For example, let's say they are black.
With that example we would have a board with fewer black cells than white cells (30 black cells and 32 white cells).
A domino tile always covers 2 different colors,
and because we cannot pair 2 white cells together, some white cells will remain uncovered. "[[Quod Erat Demonstrandum (QED)]]",
which means "that is what we wanted to prove" (in Latin).
B.5.1) Conclusions: Lessons Learned
What did we learn about proofs with these examples?
[[Proofs]] can be really convincing
A [[theorem]] is a mathematical statement that can and must be proven to be true.
So, if we consider a new problem and we claim: Problem 4 "We can tile an (8x8) board without 2 non-opposite corners (say, the bottom-left and top-left ones)", we must prove it!
Indeed, we already know that if the # of black cells is not equal to the # of white cells, the tiling does not exist. But having the same number of black and white cells does not imply that a solution exists. We have yet to prove that!
We can prove it with an example:
B.5.2) Bonus Problem!
Let's prove another theorem
Problem 5 "we can tile a (8x8) board without 2 arbitrary cells of different colors by (1x2) dominos"
To prove this we must consider the following "path" on our chessboard:
This "path" has the shape of a snake.
This snake is useful for making the following argument, but first let's make an example of a board without 2 arbitrary cells of different colors.
B.6) Existence Proofs (🎦 folder 3)
B.6.1) One Example is Enough
Now we will learn one of the simplest kinds of proofs, which is called "[[The existence proof]]".
In order to provide the [[proof]], we only need one example.
In an "[[existence proof]]" we have to consider the "[[existential statement]]".
This statement claims that an object with given properties exists.
In mathematical notation, the "[[existential statement]]" is written as: ∃x P(x).
So, we are saying that "there exists an example x where the proposed property P is satisfied, i.e. P(x) holds".
And to prove this claim it's enough to show one example.
Note: it is still worth checking other examples, but one example is enough to provide a proof.
Let's see how it works on some simple geometric problems.
This problem is about cutting geometric shapes into congruent pieces (pieces of the same shape and size).
B.6.2) Another Puzzle: Splitting an Octagon
Let's solve another puzzle!
This problem is also about cutting geometric shapes into congruent pieces (pieces of the same shape and size).
But now we must split an irregular octagon (a polygon of eight angles and eight sides).
Let's try to find examples as [[proofs]],
B.6.3) Having Fun in Real Life: Tensegrities (Optional)
Let's see an example in Real Life. We will study a type of structure named a [[tensegrity]].
Can we find an example of a structure with the property that the compressed members do not touch each other and are connected only by cables?
In other words, can we prove that ∃x P(x)? ("there exists an x such that P(x)")
B.6.3) Know your rights; proofs do not require explanation
Let's return to the [[existential statements]] and their [[proofs]].
We said that one example is all we need to provide a [[proof]]. But "[[existence proofs]]" have another property: it is not necessary to explain how we found the example; the only important part is that it exists.
Let's solve another example to understand this concept:
Example 1: Can we find a 2 digit number that becomes 7 times smaller after the first digit is deleted?
Let's find out. Imagine a two-digit number with digits "ab", i.e. the number 10a + b. Deleting the first digit leaves b, so we need 10a + b = 7b, which gives 10a = 6b, or 5a = 3b. Taking a = 3 and b = 5 works.
Conclusion: the number 35 becomes 7 times smaller after the first digit is deleted: 35 = 7 × 5.
Example 2: Can we find an n-digit number that becomes 57 times smaller after the first digit is deleted?
Questions:
How do we find a solution?
Ans: I don't know (yet), but here is a solution: 7125 = 57 × 125.
Is this enough to prove that such an n-digit number x exists and becomes 57 times smaller after the 1st digit is deleted, as established by P(x)?
Ans: Yes! It is 7125.
Do we have the right to ask for a procedure?
Ans: No! the example is all we need.
Can we find a procedure for this example?
This problem does have a procedure and it is the following:
Consider an n-digit number whose first digit is a, followed by the remaining digits b ... z.
This number must be 57 times bigger than the number without the 1st digit, so:
let x = b ... z be the number formed by the remaining digits; x has k = n - 1 digits.
Therefore, in decimal notation we have: a · 10^k + x = 57x.
We can pass x to the right side and factorize: a · 10^k = 56x = 8 · 7 · x.
From the factors of 56, the 7 must divide a, so a = 7. Then 10^k = 8x, and the smallest power of 10 that is a multiple of 8 is 10^3 = 8 × 125, therefore k = 3.
We solve for x: x = 10^3 / 8 = 125.
Conclusion: 7125 becomes 57 times smaller after the first digit is deleted:
7125=57(125)
We can append zeros (multiply by powers of 10) and it also works, for example,
71250 = 57(1250), and so on...
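A brute-force search (a sketch, not part of the original notes) confirms both the solution and its powers-of-10 variants:

```python
# Find all numbers below one million that become 57 times smaller
# when their first digit is deleted.
solutions = []
for num in range(10, 10**6):
    rest = int(str(num)[1:])  # the number with its first digit removed
    if rest * 57 == num:
        solutions.append(num)
print(solutions)  # [7125, 71250, 712500]
```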
B.6.4) Nobody can win all the time: Non-existing examples
This section is a warning!
Sometimes, when you are trying to find an example for a [[proof]], this example might not exist!!! No matter how hard you try, sometimes there is no example that can help you prove something.
Let's see how this works with the following problem.
B.6.4.1) The Backpack Problem
Problem: Imagine you want to carry some weights with 2 identical backpacks.
You also want to split the weights so that you carry the same amount in each backpack.
How would you split the weights in the following cases?
Case 1: 3 weights: 1kg, 2kg, 3kg
One backpack carries 1kg + 2kg, and the other backpack carries 3kg,
Case 2: 6 weights: 1, 2, 3, 4, 5, 7 How can we split them to 2 groups of equal weight?
First, let's determine what is the total weight?
Total weight = 1 + 2 + 3 + 4 + 5 + 7 = 22 kg
We want to split 22 into 2 groups of equal weight
22/2 = 11 for each group
How can we get 2 groups of 11 each?
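One way to find such a split (a sketch; the notes leave the search to the reader) is to try every subset:

```python
from itertools import combinations

weights = [1, 2, 3, 4, 5, 7]
target = sum(weights) // 2  # 11 kg per backpack

# Every subset that sums to 11 gives one backpack; the rest go in the other
splits = [c for r in range(1, len(weights))
          for c in combinations(weights, r) if sum(c) == target]
print(splits[0])  # (4, 7) -- so {4, 7} and {1, 2, 3, 5} works
```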
Note: There can be cases when the task is impossible, we can have obstacles that won't let us find a solution. Consider the following case.
Case 3: 6 weights: 1, 2, 3, 4, 5, 6 How can we split them to 2 groups of equal weight?
First, let's determine what is the total weight?
Total weight = 1 + 2 + 3 + 4 + 5 + 6 = 21 kg
We want to split 21 into 2 groups of equal weight
↳ 21/2 = 10.5 kg for each group; it is impossible!
We cannot get that number by adding our available weights.
Case 4: 6 weights: 2, 4, 6, 8, 10, 12 How can we split them to 2 groups of equal weight?
First, let's determine what is the total weight?
Total weight = 2 + 4 + 6 + 8 + 10 + 12 = 42 kg
We want to split 42 into 2 groups of equal weight
↳ 42/2 = 21 kg for each group
But we have a problem... 21 is odd, and all our weights are even (in fact, they are 2 times the weights of case 3). And we cannot get an odd number by adding even numbers. So finding an example (a solution) IS IMPOSSIBLE!!!
As we have seen in the previous cases, it seems that we can run into obstacles when we search for a proof of our "existential statement", such as:
when the total sum is odd,
or when the quotient is odd and all our weights are even.
But are there other types of obstacles? And can we count the total number of obstacles this problem has?
Let's find out with the following case.
Case 5: 6 weights: 1, 2, 3, 4, 5, 17 How can we split them to 2 groups of equal weight?
First, let's determine what is the total weight?
Total weight = 1 + 2 + 3 + 4 + 5 + 17 = 32 kg
We want to split 32 into 2 groups of equal weight
↳ 32/2 = 16 kg for each group.
BUT! We have one weight of 17 kg, which is already more than 16, so there is no way to split the weights into 16 kg in each backpack. IT IS IMPOSSIBLE TO SOLVE!!!
This is yet another type of obstacle...
Is this bad news?
Ans: Yes
B.6.4.2) Bad News for our Backpack problem...
Obstacles for our proof can be of different types:
odd total sum,
odd quotient with even weights,
a weight bigger than the quotient,
etc.
There is no complete list of obstacles; we don't know all the obstacles this problem has. Therefore, we cannot check whether the problem has a solution before we try to solve it.
This is an [[NP-Complete Problem]], which means no efficient (polynomial-time) algorithm for it is known, so in general it is infeasible to solve.
In our small examples we could easily show whether a solution exists or not. But if we were working with 10,000 weights, no feasible algorithm is known to check whether they can be split into 2 equal groups.
In fact, we cannot even prove that such an algorithm doesn't exist. This is the famous P vs NP problem (check "Extra Content" for more info).
The Knapsack Problem
In algorithmics, the knapsack problem, commonly abbreviated KP, is a [[combinatorial optimization]] problem: it seeks the best solution among a finite set of possible solutions. It models a situation analogous to filling a backpack, unable to hold more than a certain weight, with all or part of a set of objects, each with a specific weight and value. The objects placed in the backpack must maximize the total value without exceeding the maximum weight.
The knapsack problem has been studied for more than a century, with early works dating as far back as 1897. The name "knapsack problem" dates back to the early works of the mathematician Tobias Dantzig (1884–1956), and refers to the commonplace problem of packing the most valuable or useful items without overloading the luggage.
We are now close to the boundary of modern computer science knowledge: we know about proofs and even about [[NP-Complete Problems]], which many have tried to solve efficiently, but with no success.
The following is the Take-home Message for this 1st week.
C) Take-home Message (💬 week1)
The structure of the proof reflects the structure of the claims; it depends on what we want to prove.
The structure of proofs in mathematics typically mirrors the nature of the statements or claims being made.
Here's a list of types of statements and the corresponding structure of proofs that reflect them:
Existential Statements: Claims that something exists.
Existence Proof: Demonstrating the existence of at least one example that satisfies the statement.
Universal Statements: Claims that something is true for all elements in a certain set.
Direct Proof: Providing a logical argument that the statement is true for all possible cases.
Proof by Construction: Constructing a specific object to show that a certain property holds for all members of a set.
Conditional Statements: Claims in the form "if ... then ..."
Direct Proof: Assuming the "if" part and showing the "then" part follows.
Proof by Contrapositive: Instead of proving "if P then Q", proving "if not Q then not P".
Uniqueness Statements: Claims that there is one and only one object that satisfies a certain property.
Uniqueness Proof: Showing that at least one object has the property and that no second object can have the property without being identical to the first.
Impossibility Statements: Claims that a certain condition or object cannot exist.
Proof by Contradiction: Assuming the opposite of the claim and showing that this assumption leads to a contradiction.
Equivalence Statements: Claims that two statements are logically equivalent.
Bi-conditional Proof: Proving both directions of the equivalence, often structured as two separate conditional proofs.
Constructive Statements: Claims that provide a method to find or construct an object with a certain property.
Constructive Proof: Giving an algorithm or method that leads to the object's construction.
Non-Constructive Statements: Claims that an object exists without providing a method for finding it.
Non-Constructive Proof: Proving existence based on general principles without providing an explicit example or method of construction.
Combinatorial Statements: Claims about the arrangement, combination, or selection of a set of objects.
Combinatorial Proof: Employing counting arguments, often involving combinatorial identities or bijections.
Inductive Statements: Claims that are proven using the principle of mathematical induction.
Proof by Induction: Showing the base case is true, and then proving that if the claim holds for one case, it holds for the next.
Remember, the specific approach to proving a statement may vary depending on the context and the specific conditions of the problem. Each of these types of statements calls for a particular method of proof that aligns with the nature of the claim being made.
If the statement is "existential", then you need only one example to prove it.
Claim: an object with some properties exists
Proof: an example (one example is enough)
no need to disclose the sources of the proof.
Beware! Sometimes the claim you want to prove may be false. Don't spend the rest of your life looking for a solution that doesn't exist. In some cases, we can prove that it doesn't exist, but in other cases sadly we can't.
A) How do we know which code is "best"? (Reverse String Example)
What is Big O Notation? Let's begin with an example.
Imagine we have multiple implementations of the same function in a computer program.
For example, Imagine we have 10 different variations which all work and can solve our problem.
How do we know which is "best"?
Let's use a concrete example. We are given the following task: "Write a function that accepts a string input and returns a reversed copy of that string."
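One possible implementation out of many (a sketch; the whole point of the section is comparing such variants):

```python
def reverse_string(s):
    """Return a reversed copy of s using slicing."""
    return s[::-1]

print(reverse_string("awesome"))  # "emosewa"
```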
A.1) What does "best" even mean? Why should we care?
It would be nice to have a scale that could help us score our code, so we would know if our code is "Great!", "OK", or even "Awful". But how can we get such a scale?
This is where Big O Notation can help us!
It is going to help us talk about our code and compare our code.
But, why should we compare our code? Why should we even care?
Because...
it is important to have a precise vocabulary to talk about how our code performs.
it is useful for discussing trade-offs between different approaches and can help us make decisions.
when our code slows down or crashes, identifying parts of the code that are inefficient can help us find bottlenecks and pain points in our applications
it comes up in interviews...
B) Understanding "Time Complexity" (Total Sum Example)
Let's use a SIMPLE example: We are given the following task: "Write a function that calculates the sum of all numbers from 1 up to (and including) some number n."
Here are 2 implementations using JavaScript:
Implementation 1
It uses a for loop and performs the addition n times: for each iteration it executes one sum (total += i).
Implementation 2
This second version is shorter and uses Gauss's formula, n(n + 1)/2. This function evaluates the formula only once.
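The snippets themselves are not reproduced in these notes; here is a Python rendering of the two implementations described (the course shows JavaScript, and the function names are assumed):

```python
def add_up_to_loop(n):
    # Implementation 1: one addition per iteration, so roughly n operations
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def add_up_to_formula(n):
    # Implementation 2: Gauss's formula, a fixed number of operations
    return n * (n + 1) // 2

print(add_up_to_loop(6), add_up_to_formula(6))  # 21 21
```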
How do we know which one is better?
What does "better" even mean?
Is it the...
faster one?
less memory intensive? (the one that takes up less space?)
We can see that with small values of n, the timer is unpredictable. This is the problem with timers. Now let's try using larger inputs, so that we put our machine to some heavy work.
2nd Experiment:
Using a larger input size (1 - 1,000,000,000)
This is not at all a practical method to test our code, but now we can see a clear trend: our 2nd implementation is much faster than the 1st one when n is large.
That is the approach we are going to take in evaluating our code. We are going to think about how it performs as the input gets towards "infinity".
But, we are not going to make a graph and chart our code every single time we evaluate the performance of our code relative to other code.
So, instead of working with "time", we will work with the "number of operations" the computer has to perform. With this, we will be able to determine how our code performs just by looking at it without having to run our code and print a graph.
We will be able to roughly approximate what that chart will look like if we were to plot it.
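A timing experiment along these lines can be sketched in Python (function names assumed; the exact numbers vary by machine, which is exactly the problem with timers):

```python
import time

def add_up_to_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def add_up_to_formula(n):
    return n * (n + 1) // 2

def elapsed(f, n):
    # Wall-clock time of one call; noisy for small n, clearer for large n
    start = time.perf_counter()
    f(n)
    return time.perf_counter() - start

n = 10**6
print(f"loop:    {elapsed(add_up_to_loop, n):.6f} s")
print(f"formula: {elapsed(add_up_to_formula, n):.6f} s")
```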
Let's see how this works with our previous examples.
B.2) A better method, counting operations
Counting Operations - 2nd implementation: we count the "number of operations" in a very simplified way. The 2nd implementation always performs 3 operations (one multiplication, one addition, one division), regardless of the size of n.
Counting Operations -1st Implementation: Now if we analyse the 1st implementation, things are different.
But the exact number of operations doesn't really matter; the bigger-picture trend shows that as n grows, the "number of operations" grows (at least) proportionally to n (because of the "for loop").
Depending on what we count, the number of operations can be as low as 2n or as high as 5n + 2.
But as we said, the exact number of operations doesn't really matter. When n is large, there is no significant difference between 2n, 5n + 2, or even just n: the graph will have the shape of a straight line.
C) So... What is Big O Notation?
[[Big O notation]], also called Landau's symbol, is used in complexity theory, computer science, and [[mathematics]] to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
Landau's symbol comes from the name of the German number theoretician Edmund Landau who invented the notation. The letter O is used because the rate of growth of a function is also called its order.
In other words, Big O Notation is a "fuzzy" way of "formalized counting".
We say "fuzzy" because we are not being very specific, we are just zooming way out and imagining the graph of "number of operations" as n gets larger.
It is a nice way of describing how good our code is,
is it "Good"(and fast)?
Or is it "Awful"(and slow)?
Except, instead of using "colors" to describe the performance of our code, with [[Big O]] we describe it with a mathematical function, which has the following definition:
C.1) Formal Definition: Big O Notation
Formal Definition - Big O Notation
f(n) is an element of O(g(n)) if and only if there exists a positive constant c (sometimes also written as M) and some n₀ such that for all n greater than n₀, f(n) is smaller than c · g(n).
Or written in formal mathematics: f(n) ∈ O(g(n)) ⟺ ∃ c > 0, ∃ n₀ : ∀ n > n₀, f(n) ≤ c · g(n)
I know, the definition can be confusing. But it is way clearer if you see the definition as a graph,
So, what this definition is telling us is that:
The number of simple operations a computer has to do in an algorithm, call it f(n), is an element of O(g(n)),
where g(n) can be:
linear (g(n) = n)
quadratic (g(n) = n²)
or a constant (g(n) = 1)
if and only if f(n) ≤ c · g(n) in some interval where n is large.
For example: f(n) ≤ c · n, whenever n > n₀
What is this "O(n)" notation telling us?
It is telling us that the amount of operations (the time it takes) is going to grow roughly in proportion to n as n grows.
Did you know?
Just like we have [[Big O Notation]], we also have [[Big Omega Ω Notation]] and [[Big Theta Θ Notation]]. These are all notations used in computer science to describe the limiting behavior of functions, often to characterize the efficiency of algorithms.
In summary:
[[Big O]] is about the worst-case scenario.
[[Big Omega]] is about the best-case scenario.
[[Big Theta]] is when the worst-case and best-case are the same.
It's worth noting that in practice, [[Big O]] is the most commonly used notation because it provides an upper limit on performance, which is typically the most critical information when assessing the viability of an algorithm. However, knowing all three gives a complete picture of an algorithm's efficiency.
C.2) Applying Big O Notation to our Example (Total Sum)
Let's go back to our example
Fuzzy Counting -2nd Implementation:
We saw there were 3 operations, regardless of the size of n.
But with "fuzzy" counting we just want the big picture, so we would call this O(1), also known as constant time.
This means that as n grows, the number of operations we have to do stays the same.
And if we were to plot that in a chart, it would look like a flat line. Time doesn't increase if we increase the input n.
Now let's compare the 1st implementation.
Fuzzy Counting - 1st Implementation
In this case, we know that the number of operations is going to grow in proportion with n.
But we also know that it is bounded by a multiple of n (say, 10n).
So, the general trend is O(n) (also known as linear time).
This means that the time it takes for our function to get a result grows as a straight line as we increase n.
Note: we saw that with small values of n, the trend was not clear, but with larger values of n the trend is clear, it is a straight line.
C.3) Applying Big O Notation to more Examples
Let's review more examples:
Example 2: "Write a function that takes a number n and prints all of the numbers from 0 up to n, and then prints them from n down to 0."
Here is our code:
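The snippet is not reproduced in the notes; a Python sketch (returning the printed numbers so the behavior is easy to inspect):

```python
def count_up_and_down(n):
    printed = []
    for i in range(n + 1):          # going up: 0, 1, ..., n
        printed.append(i)
    for j in range(n, -1, -1):      # going down: n, n-1, ..., 0
        printed.append(j)
    return printed

print(count_up_and_down(2))  # [0, 1, 2, 2, 1, 0]
```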
It has 2 separate for loops. We could count the operations as 2n, but the general trend is just O(n). Therefore it has linear time.
Example 3: "Write a function that prints all the pairs of numbers between 0 and n"
For example: 01, 02, 03, ... , 11, 12, 13, ... , nn
Here is our code:
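Again, the snippet is missing from the notes; a Python sketch:

```python
def print_all_pairs(n):
    pairs = []
    for i in range(n):        # outer loop: n iterations
        for j in range(n):    # inner loop: n iterations per outer one -> n * n
            pairs.append((i, j))
    return pairs

print(print_all_pairs(2))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```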
Now we have a nested for loop (not good for [[time complexity]]; we don't get fast code).
If we count operations, we now see that we have to do roughly n² operations (due to the nested for loop).
In this case we have O(n²), also known as quadratic time.
This means that the trend now has the shape of a parabola, which grows much faster than a straight line.
C.4) Comparing the Big O Time Complexities functions
In fact, we can compare two of our functions in JavaScript. We can compare addUpTo(n) which runs in linear time and printAllPairs(n) which runs in quadratic time.
So, here is a chart that shows some of the most common [[Big O time complexities functions]]:
| Big O Notation | Time Complexity | Description |
| --- | --- | --- |
| O(1) | Constant Time | It is the best, but it is very rare |
| O(log n) | Logarithmic Time | Comes up often when dealing with [[data structures]] like [[trees]] and [[sorting algorithms]] |
| O(n) | Linear Time | Time is proportional to n |
| O(n^2) | Quadratic Time | Very slow with large values of n (but sometimes faster for small n). Suggestion: just avoid it! |
C.5) Simplifying Big O Expressions (rules of thumb)
When determining the [[time complexity]] of an algorithm, there are some helpful rules of thumb for Big O expressions.
These rules of thumb are consequences of the definition of [[Big O notation]].
Rules of Thumb:
Constants don't matter: O(2n) simplifies to O(n), O(500) to O(1), O(13n²) to O(n²).
Smaller terms don't matter: O(n² + 5n + 8) simplifies to O(n²).
Note: these shorthands won't always work, but they are a helpful starting point.
D) Test what you learnt (more examples)
Let's consider some other examples... What is the [[time complexity]] of the following functions?
Example 1: "Write a function that shows in the console a list of numbers: at least the 5 numbers from 1 to 5, and at most the n numbers from 1 to n."
Here is our code:
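The notes omit the snippet; a Python sketch of such a function (the name is assumed):

```python
def log_at_least_5(n):
    # Loops max(5, n) times; for large n the count grows with n, so it is O(n)
    out = []
    for i in range(1, max(5, n) + 1):
        out.append(i)
    return out

print(log_at_least_5(2))  # [1, 2, 3, 4, 5]
```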
Example 2: "Write a function that shows in the console a list of numbers: at most the 5 numbers from 1 to 5, and at most the n numbers from 1 to n (n can be less than 5)."
Here is our code:
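A Python sketch of such a function (the name is assumed):

```python
def log_at_most_5(n):
    # Loops min(5, n) times; never more than 5 iterations, so it is O(1)
    out = []
    for i in range(1, min(5, n) + 1):
        out.append(i)
    return out

print(log_at_most_5(1000))  # [1, 2, 3, 4, 5]
```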
Note: [[Space Complexity]]
So far, we've been focusing on [[time complexity]]: how can we analyze the runtime of an algorithm as the size of the inputs increases?
We can also use [[big O notation]] to analyze [[space complexity]]: how much additional memory do we need to allocate in order to run the code in our algorithm?
E) More Info: Time and Space Complexity in Data Structures and Algorithms
Did you know?
Just like we have [[Big O Notation]], we also have [[Big Omega Ω Notation]] and [[Big Theta Θ Notation]]. These are all notations used in [[computer science]] to describe the limiting behavior of functions, often to characterize the efficiency of algorithms.
In summary:
[[Big O]] is about the worst-case scenario.
[[Big Omega]] is about the best-case scenario.
[[Big Theta]] is when the worst-case and best-case are the same.
It's worth noting that in practice, [[Big O]] is the most commonly used notation because it provides an upper limit on performance, which is typically the most critical information when assessing the viability of an algorithm. However, knowing all three gives a complete picture of an algorithm's efficiency.
A decision problem has only two possible outputs (yes or no) on any input.
B.2.1) Complexity Class "P"
Let's see what these classes mean....
P class: it is the class of all problems which have an algorithm that can be computed in "polynomial time". Or in [[Big O Notation]], when the "[[time complexity]]" is O(n^k), where k is a constant and n is the input size.
Remember
P stands for "Polynomial Time"
P is the set of problems for which we can find an answer in a reasonable time with a polynomial algorithm.
In other words...
The class P contains all problems a computer can solve in a reasonable amount of time.
NP class: it is the class of problems whose solutions can be verified in "polynomial time", but which may take an exponential number of steps to solve. Or in [[Big O Notation]], the "[[time complexity]]" of solving them can be exponential, e.g. O(2^n), where n is the input size.
Remember
NP stands for "Non - deterministic Polynomial Time"
NP is the set of problems for which we can check in a reasonable time whether an answer is correct or not.
In other words... NP = problems for which we can verify that "something is a solution" in polynomial time.
Examples:
B.3) What is the important difference between P and NP?
Computers take an extremely long time to solve NP problems...
For example:
If a problem is in the class P and has 100 inputs, and its algorithm runs in polynomial time,
the algorithm could solve the problem in, let's say, about 3 hours...
If the problem is in the NP class, with a completion time that grows exponentially with the input size,
the algorithm will take roughly 300 quintillion years to solve. That is longer than the age of the universe.
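A back-of-the-envelope check of this polynomial-versus-exponential gap (a sketch; the exact figures depend on the assumed machine speed, here 10⁹ operations per second):

```python
OPS_PER_SECOND = 10**9          # assumed machine speed
SECONDS_PER_YEAR = 3.15 * 10**7

n = 100
poly_seconds = n**2 / OPS_PER_SECOND                  # tiny fraction of a second
exp_years = 2**n / OPS_PER_SECOND / SECONDS_PER_YEAR  # astronomically large
print(f"n^2 steps: {poly_seconds:.0e} s, 2^n steps: {exp_years:.1e} years")
```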
Although NP problems take a very long time to calculate, they also take a relatively short time to verify.
Consider Sudoku!
Sudoku is an NP problem: it takes some effort and time to solve, but once it is solved, it is pretty quick to check that the solution is correct.
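The "quick to check" part is easy to make concrete: verifying a filled 9x9 grid takes a fixed number of comparisons (a Python sketch):

```python
def is_valid_sudoku(grid):
    """Verify a completed 9x9 Sudoku grid: every row, column, and 3x3 box
    must contain exactly the digits 1..9. Verification is fast even when
    finding the solution was hard."""
    def ok(cells):
        return sorted(cells) == list(range(1, 10))

    rows = all(ok(row) for row in grid)
    cols = all(ok([grid[r][c] for r in range(9)]) for c in range(9))
    boxes = all(ok([grid[r][c]
                    for r in range(br, br + 3)
                    for c in range(bc, bc + 3)])
                for br in range(0, 9, 3) for bc in range(0, 9, 3))
    return rows and cols and boxes
```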
We said previously that the P vs. NP problem can roughly be translated to: "If a problem is easy to check the solution to, is it also easy to find the solution to?"
In other words...
Does P = NP?
What are we really asking here?
We are asking whether the complexity classes P and NP are in fact the same class.
That is, can we perhaps solve these exponential problems in polynomial time?
Why is this even a question? P and NP seem to be very different things, why would anyone think they are the same?
One reason could be: since P ≠ NP has not been proven, there is still a possibility that P = NP, and that is what we want to settle:
is P = NP? Note: most computer science researchers don't think P = NP.
C.1) What would happen if P = NP?
To understand why this is even a question, we should consider what would happen if P = NP.
Many important practical problems are NP problems; everything would be more efficient if we could find the most optimal solutions to all these problems. Anything that is feasible to check would become feasible to solve.
Other NP problems, like "protein folding", would have "polynomial time" solutions, and we could perhaps cure diseases like cancer.
As MIT complexity researcher Scott Aaronson said:
But we would also have some bad news....
Encryption (for banks) and privacy all rely on P ≠ NP.
If P = NP, all our passwords and our information would be feasible for anyone to obtain.
C.2) Has an NP problem ever been solved in Polynomial Time?
Consider a simple sorting algorithm, bubble sort: it goes through an entire array and compares each pair of neighboring numbers.
It puts the larger value on the right and the smaller value on the left.
The runtime of this algorithm is of the order of n².
So, if there are 100 numbers to be sorted, it will take a maximum of about 10,000 steps to sort the array. But can we do better?
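The algorithm described above, bubble sort, can be sketched in Python:

```python
def bubble_sort(arr):
    # Repeatedly compare neighbors, moving the larger value to the right;
    # the two nested passes over the array give the O(n^2) runtime.
    a = list(arr)
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```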
Some problems were deemed unsolvable in polynomial time, but then some clever person came along with an amazing algorithm and managed to transport the problem from the NP realm to the P realm (a famous example is primality testing, shown to be in P in 2002).
Wait...
So if it was shown that SOME NP problems are in fact in P, doesn't that prove that P = NP?
In computational complexity theory, a problem is NP-complete when:
a brute-force search algorithm can solve it, and the correctness of each solution can be verified quickly, and
the problem can be used to simulate any other problem with similar solvability.
The name "NP-complete" is short for "nondeterministic polynomial-time complete".
In this name, "nondeterministic" refers to nondeterministic Turing machines, a way of mathematically formalizing the idea of a brute-force search algorithm.
Polynomial time refers to an amount of time that is considered "quick" for a deterministic algorithm to check a single solution, or for a nondeterministic Turing machine to perform the whole search.
"Complete" refers to the property of being able to simulate everything in the same complexity class.
C.3.1) So, P = NP (complete) is still unsolved...
So the question of whether P = NP still isn't solved, because the question really hinges on the NP-complete problems: none of the problems moved into P so far were NP-complete,