Wednesday 19 November 2014

Nifty Trick to Prove f(n) ∉ O(g(n))

Let's use this as an example to show this nifty little trick (not really a trick, but I thought it was interesting):

                     6n⁵-4n³+2n ∉ O(5n⁴-3n²+1)

Now, normally, people would try to prove this the normal way: convert 6n⁵-4n³+2n ∉ O(5n⁴-3n²+1) into its definition form (the negation of the Big-O definition) and then prove that statement. That is totally fine, and it is not a wrong method of proof.

The nifty trick that I learned from my tutorial is to switch the sides of the polynomials and prove that 5n⁴-3n²+1 ∈ O(6n⁵-4n³+2n). The idea is that 6n⁵-4n³+2n has a higher degree, so it grows strictly faster than 5n⁴-3n²+1; that means 5n⁴-3n²+1 is upper bounded by 6n⁵-4n³+2n, and 6n⁵-4n³+2n cannot stay below any constant multiple of 5n⁴-3n²+1.


∃c∈R⁺, ∃B∈N, such that ∀n∈N, n≥B ⇒ 6n⁵-4n³+2n ≥ c(5n⁴-3n²+1)
Pick c = 1/3, then c∈R⁺
Pick B = 1, then B∈N
    Assume n∈N
        Assume n≥1
            Then, 6n⁵-4n³+2n ≥ 6n⁵-4n³            # 2n ≥ 0
                             ≥ 6n⁵-4n⁵            # -4n³ ≥ -4n⁵ since n³ ≤ n⁵
                             = 2n⁵
            Then, 2n⁵ ≥ 2n⁴ = (1/3)(6n⁴)          # n⁵ ≥ n⁴ since n ≥ 1
                            = (1/3)(5n⁴+n⁴)
                            ≥ (1/3)(5n⁴+1)        # n⁴ ≥ 1
                            ≥ (1/3)(5n⁴-3n²+1)    # -3n² ≤ 0
            Then, 6n⁵-4n³+2n ≥ c(5n⁴-3n²+1)
        Then, n≥B ⇒ 6n⁵-4n³+2n ≥ c(5n⁴-3n²+1)
    Then, ∀n∈N, n≥B ⇒ 6n⁵-4n³+2n ≥ c(5n⁴-3n²+1)
Then, ∃c∈R⁺, ∃B∈N, such that ∀n∈N, n≥B ⇒ 6n⁵-4n³+2n ≥ c(5n⁴-3n²+1)
Then, 5n⁴-3n²+1 ∈ O(6n⁵-4n³+2n)
Then, 6n⁵-4n³+2n ∉ O(5n⁴-3n²+1)
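As a quick numeric sanity check (my own addition, not part of the tutorial), the chain of inequalities with c = 1/3 and B = 1 can be verified for a range of n:

```python
# Sanity check: with c = 1/3 and B = 1, verify that
# 6n^5 - 4n^3 + 2n >= c * (5n^4 - 3n^2 + 1) for all sampled n >= B.
from fractions import Fraction

def f(n):
    return 6 * n**5 - 4 * n**3 + 2 * n

def g(n):
    return 5 * n**4 - 3 * n**2 + 1

c = Fraction(1, 3)
B = 1

assert all(f(n) >= c * g(n) for n in range(B, 1000))

# The ratio f(n)/g(n) keeps growing, which is the intuition behind
# why f cannot be upper bounded by any constant multiple of g.
print(float(f(10) / g(10)), float(f(1000) / g(1000)))
```

This obviously doesn't replace the proof (a check over finitely many n proves nothing about all n), but it's a nice way to catch a bad choice of c or B before writing everything out.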

Thursday 13 November 2014

General Big-O Kappa

Time to prove some general statements about Big-O, because why not. It's interesting and fun!

The most important definition needed to prove general statements about Big-O is F:
        F = { f | f : N → R≥0 }
This definition states that F is the set of all functions that take a natural number as input and return a non-negative real number.

For example, let's prove this statement:
∀f,g,h ∈ F, ( f ∈ O(g) ∧ g ∈ O(h) ) ⇒ f ∈ O(h)
In English, the statement says: f is upper bounded by g (it grows no faster than g), and g is upper bounded by h (it grows no faster than h); then f must be upper bounded by h, meaning it grows no faster than h.

Now there are a few things to think about. Since f ∈ O(g), there is a breakpoint (B) and a constant factor (c) so that f grows no faster than cg beyond B. At the same time, since g ∈ O(h), there is another breakpoint (B') and a second constant factor (c') so that g grows no faster than c'h beyond B'. This means we want to find a third breakpoint (B'') and a third constant factor (c'') so that f grows no faster than c''h beyond B''.
B'' should be at a point beyond both B and B'. ∴ B'' = max(B, B')
To choose c'': for n ≥ B'', f ≤ cg ≤ c(c'h) = cc'h. ∴ c'' = cc'

From here on, it's all smooth sailing proving this statement, since we have found a B'' and c'' that make f grow no faster than c''h. Just assume and pick the antecedents, prove that f(n) ≤ c''h(n), and finally unwrap the antecedents, and voila! :)
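To make the witness-combining concrete, here's a small sketch of my own (the example functions f, g, h and their witnesses are made up for illustration): it combines witnesses (c, B) for f ∈ O(g) and (c', B') for g ∈ O(h) into (c'', B'') = (cc', max(B, B')) and checks that the combined witness works for f ∈ O(h).

```python
# Sketch: combining Big-O witnesses. If f(n) <= c*g(n) for n >= B and
# g(n) <= c2*h(n) for n >= B2, then f(n) <= (c*c2)*h(n) for n >= max(B, B2).

def combine_witnesses(c, B, c2, B2):
    """Return the combined witness (c'', B'') for f in O(h)."""
    return c * c2, max(B, B2)

# Made-up example: f(n) = 3n, g(n) = n^2, h(n) = n^3.
f = lambda n: 3 * n
g = lambda n: n ** 2
h = lambda n: n ** 3

# f <= 3*g for n >= 1, and g <= 1*h for n >= 1.
c_combined, B_combined = combine_witnesses(3, 1, 1, 1)

# Spot-check that the combined witness upper bounds f by h.
assert all(f(n) <= c_combined * h(n) for n in range(B_combined, 500))
print(c_combined, B_combined)  # 3 1
```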

Tuesday 11 November 2014

Big-Oh of polynomials

Definition of O(n²)
Basically, f(n) is in O(n²) iff ∃c∈R⁺, ∃B∈N, such that ∀n∈N, n≥B ⇒ f(n) ≤ cn²

In English, the statement says that there exist a constant c and a breakpoint B such that for every n at or beyond B, f(n) is upper bounded by cn².

Now, when proving a polynomial is upper bounded by O(n²), it is all about picking c and B.
- First, pick a c that is larger than the constant factor of the highest-order term in the polynomial.
- Next, pick a B; usually B = 1 works in most cases, but be careful when doing so.

For an example, 7n⁶-5n⁴+2n³ ∈ O(6n⁸-4n⁵+n²)
Some things to consider or assume:
1) Assume n ≥ 1
2) Upper bound the left side by overestimating
3) Lower bound the right side by underestimating
4) Finally, choose a c that connects the two bounds.

7n⁶-5n⁴+2n³ ∈ O(6n⁸-4n⁵+n²)
Pick c = 9/2
Pick B = 1
    Assume n∈N
        Assume n≥1
            Then 7n⁶-5n⁴+2n³ ≤ 7n⁶+2n³              # -5n⁴ ≤ 0
                             ≤ 7n⁶+2n⁶ = 9n⁶        # n³ ≤ n⁶ since n ≥ 1
                             = (9/2)(2n⁶) = c(2n⁶)  # c = 9/2
                             ≤ c(2n⁸) = c(6n⁸-4n⁸)  # n⁶ ≤ n⁸
                             ≤ c(6n⁸-4n⁵)           # -4n⁸ ≤ -4n⁵ since n ≥ 1
                             ≤ c(6n⁸-4n⁵+n²)        # 0 ≤ n²
            Then 7n⁶-5n⁴+2n³ ≤ c(6n⁸-4n⁵+n²)
<introduce the antecedents and quantifiers and you are done!>

I think the biggest piece of advice before tackling such proofs is to first treat 6n⁸-4n⁵+n² as the larger side (the polynomial doing the upper bounding) and 7n⁶-5n⁴+2n³ (the polynomial being bounded) as the smaller side. Then you slowly underestimate the larger side and overestimate the smaller side, and meet in the middle by choosing a c that connects the two bounds.
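One way to see that c = 9/2 with B = 1 is a safe choice (my own check, not part of the proof) is to verify the inequality numerically and look at the ratio of the two polynomials:

```python
# The proof picks c = 9/2 and B = 1.  Check numerically that
# 7n^6 - 5n^4 + 2n^3 <= (9/2) * (6n^8 - 4n^5 + n^2) for sampled n >= 1.
def f(n):
    return 7 * n**6 - 5 * n**4 + 2 * n**3

def g(n):
    return 6 * n**8 - 4 * n**5 + n**2

# Multiply both sides by 2 to stay in exact integer arithmetic.
assert all(2 * f(n) <= 9 * g(n) for n in range(1, 1000))

# The ratio f(n)/g(n) shrinks toward 0 as n grows, so c = 9/2 has
# plenty of slack; it just makes the algebra in the proof clean.
print([round(f(n) / g(n), 3) for n in (1, 2, 5, 10)])
```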

Monday 10 November 2014

Algorithm Complexity

Basically for running time:
- Count the number of steps
- Constant factors don't matter
- And finally, only the highest-order term matters

So basically, n², 3n², and any other similar polynomial that grows like n² up to constant factors and lower-order terms are all in the same class, O(n²).

Now, O(n²) is an asymptotic notation.
Then, O(f(n)) is an asymptotic upper bound.
Which means: the set of functions that grow no faster than f(n).

Ω(f(n)) is the asymptotic lower-bound.

θ(f(n)) is the asymptotic tight-bound.

Now, on to the time complexity of a program.
Basically, every line (an initialization, assigning a value to a field, etc.) counts as one step. The tricky parts are when a program has loops or recursive calls. When taking the worst case for a loop, I was told to assume it runs all the way through, e.g. a search that only finds its target at the last element of an array, so most loops take n steps in the worst case. When there is another loop inside the loop and you take the worst case, you assume the same thing for the inner loop, so it takes n steps per outer iteration and the whole nested loop takes n*n steps. Afterwards, you add the steps that aren't in a loop, and you have the time complexity of the program.
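The loop-counting rule can be illustrated with a toy sketch of my own: instrument a doubly nested loop and watch the step count scale like n².

```python
# Toy illustration: count the "steps" a doubly nested loop performs.
def count_steps(n):
    steps = 0
    total = 0                  # one initialization step
    steps += 1
    for i in range(n):         # outer loop runs n times
        for j in range(n):     # inner loop runs n times per outer pass
            total += 1         # one step per inner iteration
            steps += 1
    return steps

# The step count is n*n + 1: the n^2 term dominates, and the constant
# "+1" and any constant factors get dropped, so this is in O(n^2).
print(count_steps(10))  # 101
```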

Friday 24 October 2014

Penny Pile

Interesting problem handout from lecture today. It is called the Penny Pile Problem. Basically you have two boxes, the left box contains 64 pennies and the right contains nothing. Now, there are two operations that you are allowed to use in order to move pennies from one box to the other.

L: If the left box has an even number of pennies, you can transfer half of them to the right box. If the left box has an odd number of pennies, you cannot use this operation.
R: If the right box has an even number of pennies, you can transfer half of them to the left box. If the right box has an odd number of pennies, you cannot use this operation.

Now, the problem is to arrange things so that one of the boxes has 48 pennies using the operations from above.

So, let's just carry out the operations.
(64,0)
(32,32)
(48,16) - Done!
Well, that was easy.

Now, are we able to choose any number from a range of 0 to 64 and arrange it so that we are able to achieve that number? I think it is definitely possible.
Let's begin from (64,0), the only step is to divide the pennies into two piles each containing 32 pennies. Therefore, I think I will tackle the problem from 32 since once we reach 32, we are able to get back to 64. From here on, my intuition is to work backwards from the chosen number and reach 32. If the chosen number is bigger than 32, then the other number can be arranged to become 32 and reach 64.
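My hunch that every number from 0 to 64 is achievable can actually be checked by brute force (my own sketch, not from the handout): search every state reachable from (64, 0) with the L and R moves.

```python
# Brute-force the Penny Pile problem: starting from (64, 0), apply the
# L and R operations and collect every reachable distribution.
from collections import deque

def reachable(start=(64, 0)):
    seen = {start}
    queue = deque([start])
    while queue:
        left, right = queue.popleft()
        moves = []
        if left % 2 == 0:      # L: move half of the left box to the right
            moves.append((left // 2, right + left // 2))
        if right % 2 == 0:     # R: move half of the right box to the left
            moves.append((left + right // 2, right // 2))
        for state in moves:
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return seen

states = reachable()
counts = {pennies for state in states for pennies in state}
# Every target from 0 to 64 shows up in some box of some reachable state.
print(counts == set(range(65)))  # True
```

Interestingly, if you look at the left box alone, 0 never shows up there (a move into the left box always leaves at least 32 pennies), so the "one of the boxes" phrasing matters.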

Proofs

Yesterday was my tutorial quiz day, and we had to prove one statement in the quiz. Before the quiz, I was pretty confident, because proving a statement is basically making assumptions, proving the consequent, and then unwrapping the assumptions. I was pretty stupid not to solve some extra questions and practice proving statements. The question was similar to the one on the tutorial exercise:

For all x in the whole numbers, if there exists a y in the whole numbers that makes x = 3y + 1, then there exists a y in the whole numbers that makes x^2 = 3y + 1.

It's pretty simple right?
Square the antecedent x = 3y + 1 to get x^2 = 9y^2 + 6y + 1.
Factor out 3 to make x^2 = 3(3y^2 + 2y) + 1.
Then 3y^2 + 2y serves as the consequent's y.
And then unwrap all the assumptions and the proving is done.
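The algebra above can be double-checked with a quick script (my own addition): for every x of the form 3y + 1, x² is again of that form, with the new witness being 3y² + 2y.

```python
# Check: if x = 3y + 1, then x^2 = 3*(3y^2 + 2y) + 1.
for y in range(200):
    x = 3 * y + 1
    assert x * x == 3 * (3 * y * y + 2 * y) + 1
print("ok")
```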

The question that the class received in the quiz was a little altered and when I squared the antecedent and factored out a common number, the numbers inside the bracket did not equal to the consequent's y. I was going insane on why I was not able to solve it and in the end I could not figure it out.

CSC165 is a strange course for me, because half the course is about logic and about thinking through and tackling each problem in an interesting way. I was surprised that I did so poorly on assignment #1, because it was very confusing for me, yet the average was 80%, which is insanely high in my opinion. I need to start studying more for CSC165 and up my game in the assignments and quizzes.

Friday 26 September 2014

Folding Problem

The basic gist of this problem is that you have a strip of paper. You hold it with both hands, from the left end to the right end. Now you fold it, so that the left end is on top of the right end. You repeat this several times, then unfold the whole thing while keeping the right end in your right hand.

Now, the paper has creases that point up and down. The problem is to figure out whether you can predict the sequence of creases for a given number of folds.

I haven't found a full solution for this problem, but I do think that if we translate this into code, recursion is the answer. I think the sequence of creases from folding once can be used to predict the sequence from folding twice, three times, and so on. Moreover, as you jot down the data for folding once, twice, thrice, and so on, you can see the previous folding sequence reappear inside the next one. This is definitely an interesting problem that could be solved by programming it.
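Here is my guess at that recursive pattern as a sketch (my own code, and the orientation convention is an assumption: which symbol means an "up" versus a "down" crease depends on how you hold the strip): the crease sequence after k+1 folds seems to be the k-fold sequence, then a middle crease, then the k-fold sequence reversed with every crease flipped.

```python
# Sketch of the conjectured recursion for the fold-crease sequence.
# 1 and 0 stand for the two crease orientations; which is "up" is a
# convention that depends on how the strip is held when unfolding.
def creases(k):
    if k == 0:
        return []          # no folds, no creases
    prev = creases(k - 1)
    # previous sequence + middle crease + flipped reversal of previous
    return prev + [1] + [1 - c for c in reversed(prev)]

for k in range(1, 4):
    print(k, creases(k))
# Folding k times gives 2^k - 1 creases: 1, 3, 7, ...
```

Jotting down a few outputs and comparing them against an actual folded strip would be the way to confirm (or refute) the conjecture.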