Monday 1 December 2014

Okay, turns out I lied; the last post is not my final slog post, because I have yet to do a slog on a problem-solving episode. In this slog, I will talk about the diagonals problem. The problem is as follows: we are given a rectangular grid made up of m rows and n columns (where m and n are positive whole numbers). If we draw a line from the top left corner to the bottom right corner, how many grid squares will the line pass through?

For this problem I worked with a partner, and our first step was understanding the problem. We decided that the problem was to find some sort of pattern or formula that would tell us how many grid squares a diagonal line cuts through in a given rectangle with m rows and n columns.

This is what the rectangle looked like with a diagonal through it.


In this rectangle, we have 4 rows and 6 columns. The shaded squares represent the 8 squares that the diagonal line passes through.

So our plan was to first draw a few diagonal lines, each starting from the top left corner and ending at a different bottom right corner. We wanted to see what results we would get with different numbers of rows and columns, and whether any patterns showed up.

We did this by using the same grid and increasing/decreasing the number of rows and columns. We still started drawing the diagonal line from the top left corner, but each time we would choose a different bottom right corner and count how many grid squares the line passed through.

At this point, we came up with a few results. Let's call p the number of squares the diagonal line passes through:

If m = 4, n = 1, then p = 4
If m = 4, n = 2, then p = 4
If m = 4, n = 3, then p = 6
If m = 4, n = 4, then p = 4
If m = 4, n = 5, then p = 8
If m = 4, n = 6, then p = 8

We noticed that when there is an odd number of columns, the number of squares for that column count is equal to the number of squares for the next column count. But then we were thrown off when m = 4 and n = 4. We thought we were going to get p = 6 as well, but it turns out that when m = n, p = m = n. So we then asked ourselves, "What does this mean?" Well, at first we weren't even sure ourselves.

Currently I am still trying to figure out this problem. I have to admit that I don't actually have the answer yet, but that doesn't mean I won't get it. The reason I'm stuck is that we only tested a handful of cases with m = 4 and n varying. My partner did other examples with different numbers of rows and columns, but when we came back to discuss our findings, we were still puzzled. Before I attempt to figure out the solution, I need to test an odd number of rows, or perhaps a larger or smaller number of rows. Right now the amount of information that I have is insufficient.

Before I published this slog, I went to check my partner's slog, and surprisingly enough, my partner was able to solve it. Basically, what he figured out is that the number of squares the diagonal line passes through is equal to the number of columns plus the number of rows, minus the highest common factor of the number of rows and columns.

So in a formula it would be: m + n − HCF(m, n) = p

And it indeed works for the cases that I've tested. Am I 100% sure this is the correct solution? No, but it's something, right?
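One way to check it, at least for small grids, is to brute-force count the squares the diagonal crosses and compare against the formula. Here is a quick Python sketch I wrote (my own addition, not something we did in class):

from fractions import Fraction
from math import gcd

def squares_crossed(m, n):
    """Brute force: count the unit squares whose interior the
    diagonal of an m-row by n-column grid passes through."""
    count = 0
    for i in range(n):       # column index (x direction)
        for j in range(m):   # row index (y direction)
            # The diagonal y = (m/n)x sits inside the row strip (j, j+1)
            # exactly when x is in (j*n/m, (j+1)*n/m).
            lo = max(Fraction(i), Fraction(j * n, m))
            hi = min(Fraction(i + 1), Fraction((j + 1) * n, m))
            if lo < hi:      # open intervals overlap: the line crosses this square
                count += 1
    return count

for m in range(1, 10):
    for n in range(1, 10):
        assert squares_crossed(m, n) == m + n - gcd(m, n)
print("formula agrees with brute force for all m, n up to 9")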

Anyways, this problem was quite interesting and very different from what we regularly do in 165.

I also recommend going over to my partner's slog where he goes over what he did to solve this problem: randomcsc165blog.wordpress.com

Oh and if someone actually does know the correct solution and understands it fully, please leave a comment! Perhaps a link to your slog as well.

Saturday 29 November 2014

Welcome to my final slog for CSC165. It’s crazy how we are already at the end of the term. In this slog, I will be quickly going over weeks 11 and 12 and also briefly going over Big Theta. In weeks 11 and 12, halting, computability, and countability were the main topics we covered.

Going off last week’s slog, I talked about Big Oh and Big Omega, which are the upper bound and lower bound. Big Theta is basically in between: it is a tight bound, where a function is both lower bounded and upper bounded by another function. In notation, the statement would be:

f ∈ Θ(g)
This means:

Θ(g) = {f : ℕ → ℝ⁺ | ∃C₁ ∈ ℝ⁺, ∃C₂ ∈ ℝ⁺, ∃B ∈ ℕ, ∀n ∈ ℕ, n ≥ B ⇒ C₁g(n) ≤ f(n) ≤ C₂g(n)}

Notice the ℕ → ℝ⁺ notation at the beginning of the statement. That means we are passing a natural number in as the input and getting a positive real number out as the output of the function f.
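For example (a quick example of my own, not from lecture): 3n² + 5 ∈ Θ(n²). Pick C₁ = 3, C₂ = 4, and B = 3; then for every n ≥ 3 we have n² ≥ 5, so 3n² ≤ 3n² + 5 ≤ 3n² + n² = 4n².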

Now onto halting, computability, and countability. Computers solve problems with algorithms in a systematic way, but there are problems that cannot be solved by any algorithm. The halt function is one example: it is non-computable.

In CSC165, we learn to decide whether a function is computable or not by using reductions.

Suppose we have a function f, and we can take some well-defined function g and extend it to build f. Then:

g computable ⇒ f computable

We say f reduces to g, and using the contrapositive:

f non-computable ⇒ g non-computable

To prove that a function is computable, show that this function reduces to a function g that is computable.

To prove that a function is non-computable, show that a non-computable function f reduces to this function.

Let’s take the function initialized(f, v), where f is a function and v is a variable: it returns whether v is initialized before its first use in f.

Then for a function f1(x) that returns x+1 and prints a variable y afterwards, initialized(f1, y) will return True, because the print comes after the return, so y is never actually used in f1.

For a function f2(x) that returns x+y+1, initialized(f2, y) will return False, because y has not been initialized before being used in the function.
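In Python, those two example functions might look something like this (my own sketch of what was described):

def f1(x):
    return x + 1
    print(y)   # unreachable: comes after the return, so y is never used

def f2(x):
    return x + y + 1   # y is used here without ever being initialized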

Now for a problem: let's prove that initialized(f, v) is non-computable. To do that, we need to show that some non-computable function reduces to initialized(f, v). This non-computable function can be halt(f, n), where f is a function and n is an input. So we need to show that halt(f, n) reduces to initialized(f, v); in other words, implement halt(f, n) using initialized(f, v).

First we create the function halt(f, n), which calls initialized(f, v) on a helper function that we define inside halt. This helper function, let’s say g, takes an argument x, calls the function f with the argument n that was passed in to halt(f, n), and then calls print(v). At the end of halt(f, n), we return "not initialized(g, v)".
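As a rough sketch in Python (treating initialized as a black box we are handed; this is conceptual, since initialized cannot actually exist):

def halt(f, n):
    def g(x):
        f(n)        # if f(n) runs forever, the line below is never reached
        print(v)    # v is used here without ever being initialized
    return not initialized(g, v)   # v names the variable being checked in g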

If f(n) halts, then g reaches print(v), but v has not been initialized anywhere in the code, so initialized(g, v) returns False, and “return not initialized(g, v)” returns True.

If f(n) doesn’t halt, then g never gets to print(v), so v is never used, initialized(g, v) returns True, and “return not initialized(g, v)” returns False.

So we have implemented halt using initialized: halt reduces to initialized, and since halt is non-computable, initialized must be non-computable as well. The deeper reason halt is non-computable is that the pairs of (function, input) can never all be properly matched up with the correct True/False values, and this comes down to counting infinite sets. So now the question is, how do we count infinite sets? When comparing sets, we use functions between them and check whether they are one-to-one and onto. The definition of one-to-one is:

∀x, y ∈ A, x ≠ y ⇒ f(x) ≠ f(y)

And the definition of onto is:

∀y ∈ B, ∃x ∈ A, f(x) = y

A set is countable if and only if it can be defined as a list. For example:
n f(n)
0 → 0
1 → 2
2 → 4
3 → 6
4 → 8

Each item f(n) corresponds to exactly one element n of ℕ, so the even numbers can be listed out and are countable.
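In Python terms (a tiny sketch of my own), the list above is just f(n) = 2n applied to each natural number in order:

def f(n):
    return 2 * n   # the n-th even number

print([f(n) for n in range(5)])   # prints [0, 2, 4, 6, 8]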

We also talked a little about simple induction, which is actually something we will do in CSC236. Let’s say we have a predicate P; we know that P(0) is true, and we want to show that the truth transfers over to P(1), P(2), …, P(n). In notation it would be:

∀n ∈ ℕ, P(n) ⇒ P(n+1)

Then we believe that P is true for all natural numbers. 

∀n ∈ ℕ, P(n)

But not all predicates will start with a base case where P(0) is true. Some will start later: for example, P(0), P(1), and P(2) are false, but P(3), P(4), P(5), and onward are all true. So there are two conditions we need to consider.

Suppose P(n) is a predicate over ℕ. If P
     1) starts out true at P(0), or starts at some P(k) where k ∈ ℕ, and
     2) the truth of P transfers from each number to the next,
then P is true for every natural number from that starting point onward.
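For instance (an example of my own, not from lecture), take P(n): 2n + 1 < n². P(0), P(1), and P(2) are false, but P(3) is true (7 < 9), and the truth transfers: if 2n + 1 < n², then 2(n+1) + 1 = (2n + 1) + 2 < n² + 2 ≤ n² + 2n + 1 = (n+1)², so P(n+1) holds. Hence P(n) is true for every n ≥ 3.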

And that is what we covered in the last few weeks of CSC165. I'm still practicing Big Oh, Big Omega, Big Theta, and halting problems. And I hope that I will be able to do well in Assignment 3. CSC165 has been very interesting and all computer science students who are going to take it next term and years later should definitely pay attention in this course. Thanks for reading!

Wednesday 19 November 2014

Hello and welcome to this week’s slog. This slog will cover weeks 9 and 10. Also, due to CDF being down this weekend, this week’s slog was a little delayed. In this slog I will talk about what happened during these two weeks, which was mainly Big Oh and Big Omega. In my previous slog, I mentioned that I didn’t really understand Big Oh, but after doing some examples in lectures, I sort of get how to do the proofs.

Basically from what I understand, when we want Big Oh, we are looking for the upper bound on the worst case running time for an algorithm for all inputs of any size. When we want to find if a function is in Big Oh of another function, in symbolic notation it would be:

f ∈ O(g)

What we are really saying is:
∃C ∈ ℝ⁺, ∃B ∈ ℕ, ∀n ∈ ℕ, n ≥ B ⇒ f(n) ≤ C·g(n)

What this means is that there exist a positive real number C and a natural number B such that for every natural number n, if n is greater than or equal to B, then the function f is less than or equal to the function g multiplied by C. Since we are looking for the upper bound, we want to prove that the function f is at most the function g (times a constant); in other words, f is no worse than g in the worst case.

For Big Omega, it is slightly different: we are looking for the lower bound. When we want to find if a function is in Big Omega of another function, in symbolic notation it would be:
f ∈ Ω(g)

What we are really saying is:
∃C ∈ ℝ⁺, ∃B ∈ ℕ, ∀n ∈ ℕ, n ≥ B ⇒ f(n) ≥ C·g(n)

Now the only difference from Big Oh is the direction of the inequality: the function f is at least as big as the function g (times a constant); in other words, f is at least as bad as g in the worst case.

Okay, so let’s actually do an example.

Prove that 5n⁴ − 3n² + 1 ∈ O(6n⁵ − 4n³ + 2n)

So this statement in symbolic notation would be:

∃C ∈ ℝ⁺, ∃B ∈ ℕ, ∀n ∈ ℕ, n ≥ B ⇒ 5n⁴ − 3n² + 1 ≤ C(6n⁵ − 4n³ + 2n)

So starting off, we need to pick a C and a B, but we don’t know what to pick yet, so we will have to do some work to figure it out. Also, how do we go about showing that the consequent is true? Let’s do some scratch work.

Let’s start by working on the left side

5n⁴ − 3n² + 1 ≤ 5n⁴ + 1 # the sum is greater than or equal to the difference
                     ≤ 5n⁴ + n⁴ # n ≥ 1, so we can rewrite 1 as n⁴
                     ≤ 6n⁴ # add the two terms together
                     ≤ n·n⁴ # now if we require n ≥ 6, we can pick B to be 6
                     ≤ n⁵ # add the exponents

Now what? Well, I think that’s the most we can do for the left side; how about doing some work on the right side?

6n⁵ − 4n³ + 2n ≥ 6n⁵ − 4n³ # just like the left side, the sum is greater than or equal to the difference of the two terms
                       ≥ 6n⁵ − 4n⁵ # −n⁵ ≤ −n³
                       ≥ 2n⁵ # subtract the two terms
                       ≥ n⁵ # drop the 2, since 2n⁵ is greater than n⁵ by itself

Okay, now we can piece these two together and start to write our proof:

Pick C = __1__, then C ∈ ℝ⁺
Pick B = __6__, then B ∈ ℕ
Assume n ∈ ℕ and n ≥ B # universal quantifier assumption and antecedent
            Then 5n⁴ − 3n² + 1 ≤ 5n⁴ + 1
                                           ≤ 5n⁴ + n⁴ # n ≥ 1
                                           ≤ 6n⁴
                                           ≤ n·n⁴ # n ≥ 6, which is why we picked B = 6
                                           ≤ n⁵
                                           ≤ 2n⁵
                                           ≤ 6n⁵ − 4n⁵
                                           ≤ 6n⁵ − 4n³ # −n⁵ ≤ −n³
                                           ≤ 6n⁵ − 4n³ + 2n # = C(6n⁵ − 4n³ + 2n), since we chose C = 1
Then ∀n ∈ ℕ, n ≥ B ⇒ 5n⁴ − 3n² + 1 ≤ C(6n⁵ − 4n³ + 2n) # introduce ⇒ and ∀
Then ∃C ∈ ℝ⁺, ∃B ∈ ℕ, ∀n ∈ ℕ, n ≥ B ⇒ 5n⁴ − 3n² + 1 ≤ C(6n⁵ − 4n³ + 2n) # introduce ∃
Then 5n⁴ − 3n² + 1 ∈ O(6n⁵ − 4n³ + 2n) # conclude proof

The reason I chose this proof (a question from Tutorial #8) is because I was stuck at the part where we can’t do anything more to the left side of the consequent. I did not realize that we had to work backwards on the right side to get the two functions to match up. I decided to use it as an example to help explain Big Oh and also how to prove that something is in Big Oh.
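Not a proof, of course, but here is a quick sanity check of the chosen C and B in Python (my own addition):

def f(n):
    return 5 * n**4 - 3 * n**2 + 1

def g(n):
    return 6 * n**5 - 4 * n**3 + 2 * n

C, B = 1, 6
# Check f(n) <= C * g(n) for a large range of n past the breakpoint B.
assert all(f(n) <= C * g(n) for n in range(B, 10000))
print("f(n) <= C * g(n) holds for all tested n >= B")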

To show a proof where a function is not in Big Oh of another function, the symbolic notation would be:
f ∉ O(g)
Which says:
∀C ∈ ℝ⁺, ∀B ∈ ℕ, ∃n ∈ ℕ, n ≥ B ∧ f(n) > C·g(n)

This time, we say that for every positive real number C and every natural number B, there exists a natural number n such that n is greater than or equal to B and the function f is greater than the function g multiplied by C. This means that the function f is not bounded above by g; in other words, f is worse than g.

For a function not in Big Omega of another function, the symbolic notation would be:

f ∉ Ω(g)
Which says:
∀C ∈ ℝ⁺, ∀B ∈ ℕ, ∃n ∈ ℕ, n ≥ B ∧ f(n) < C·g(n)

So the function f is not bounded below by g; in other words, f is better than g.


This is my understanding of Big Oh and Big Omega so far. As we get closer to the exam date, I hope to understand more about how to do the proofs and the logic behind them. I hope that by practicing on the questions in Assignment 3, I will be able to figure out how to do them. Thanks for reading.

Sunday 2 November 2014

Hello and welcome! This slog will cover weeks 7-8. In week 7 we mainly focused on proofs by cases, inference, and sorting algorithms. In week 8, we mainly focused on worst case, counting steps, and big-Oh.

So in week 7, the material covered was very familiar, as it was just going over some of the things we have already been doing. Proofs by cases were nothing too new and were very easy to follow. For these proofs, the idea is that if you’re given a statement to prove that has two or more parts, then you organize the proof into sub-proofs, or cases. For example, take the statement “For every natural number n, the sum of n² and n is even”: natural numbers can be even or odd, so we would split the proof into two cases, where the first case assumes n is an even number and the second case assumes n is an odd number.
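Here is a quick sketch of how those two cases go (my own working, in the lecture's proof style):

Case 1: Assume n is even. Then n = 2k for some natural number k.
          Then n² + n = 4k² + 2k = 2(2k² + k), which is even.
Case 2: Assume n is odd. Then n = 2k + 1 for some natural number k.
          Then n² + n = (4k² + 4k + 1) + (2k + 1) = 4k² + 6k + 2 = 2(2k² + 3k + 1), which is even.
In either case n² + n is even, so the statement holds for every natural number n.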

Next we talked about inference rules in our proofs, like implication, universal, and existential introduction, as well as conjunction and disjunction introduction. These are things we have already used in our proofs, where we either assume what we’re given or pick something that belongs to the specified domain. These “rules” have already been applied in the proofs we've been doing, so I won’t go into much detail about them.

Lastly in week 7, we started talking about sorting algorithms. We talked about insertion sort, selection sort, and other sorts such as Timsort and merge sort. Basically, the question is which sorting method is best for sorting some cards in your hand. Well, it actually depends on what cards you have, for example numbers, colours, and suits, and how you want to sort them. I myself am not too familiar with each sorting method, but the idea is basically comparing one card in your hand with the card to the right or left of it, seeing which one is greater or smaller, and then rearranging that card into the order that you want.

Then in week 8, we talked about counting steps with those sorting methods. The number of steps it takes to sort something is really what we want to know. We talked about a Python program with a while loop, where we basically had to count the number of steps the program executes. For example,
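the exact program from lecture isn't shown here, but it was something along these lines (my reconstruction, with rough step counts as comments):

def linear_search(L, x):
    """Return the index of x in list L, or -1 if x is not in L."""
    i = 0                  # runs once
    while i < len(L):      # runs at most len(L) + 1 times
        if L[i] == x:      # runs once per loop iteration
            return i       # runs at most once
        i = i + 1          # runs once per loop iteration
    return -1              # runs once, only if the loop finishes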


When this function is called, the first line runs only once; the while loop runs until its condition is false, and the condition is checked once more to jump out of the loop. The lines of code inside the while loop run as many times as the loop does. The return statement inside the loop only executes when the if condition is true, which means it runs at most once. After the whole while loop executes, the final return statement runs once as well. If the first return statement inside the while loop executes, then the bottom return statement is skipped.

In 165, we mainly focus on the worst case. I’m not too sure what worst case is, and I’m still trying to wrap my head around big-Oh. I hope that by next week I’ll figure it out through tutorials. We also talked about upper bounds and lower bounds, where the worst case is no worse than the upper bound and at least as bad as the lower bound. I really wish I could explain big-Oh further, but again, I still don’t really understand the concept yet, so by next slog I will try to talk more about big-Oh and the proofs for upper and lower bounds.


Thanks for reading!

Sunday 19 October 2014

Ah, it's been a while since I posted. I hope everyone had a nice Thanksgiving. It was my birthday last week, so that was a bonus too. Before Thanksgiving we had a midterm, so we only had 2 lectures that week and 2 lectures this week. In both weeks, we mainly covered proofs and proof methods.

The first method that was covered was proving the contrapositive. For example, suppose we were told to prove that for any natural number n, n² odd implies that n is odd. Then by contrapositive, I would instead prove that for any natural number n, n even implies n² even. The usefulness of this is that proving the contrapositive is sometimes easier, and since the contrapositive is logically equivalent to the original statement, proving the contrapositive proves the original statement.

The next method is proving existence, where it is sufficient to show that the set is non-empty by exhibiting one element. For example, prove:

∃ x ∈ ℝ, x³ + 3x² − 4x = 12

For this proof, we would simply pick an x and show that it works. In this case, if we pick x = 2, which is a real number, and plug it into the equation, we get 2³ + 3(2²) − 4(2) = 8 + 12 − 8 = 12. Thus we exhibited one element and the proof is done.

Next is proof by contradiction. Say we were to prove the given statement:

∀ n ∈ ℕ, |P| > n

where P is the set of prime numbers. It would be difficult to directly prove that the number of primes is greater than every natural number. Instead, we assume the negation of the statement,

∃ n ∈ ℕ, |P| ≤ n

that is, that there are only finitely many primes, and show that this assumption leads to a contradiction, which is a lot easier.

Lastly, we did a proof where we have to pick values: ∀ e ∈ ℝ⁺, ∃ d ∈ ℝ⁺, ∀ x ∈ ℝ, |x − 3| < d ⇒ |x² − 3²| < e. Here is how it went:

Assume e ∈ ℝ⁺
          Pick d = __min(1, e/7)__, then d ∈ ℝ⁺ # we left this blank at first and did some work below to find d
          Assume x ∈ ℝ # generic x
                    Assume |x − 3| < d
                              Then |x² − 3²| = |x − 3| · |x + 3|
                              Then since d ≤ 1, |x − 3| < 1, so 2 < x < 4 # bounded d to be less than or equal to 1 and found the interval for x
                              Then |x + 3| < 4 + 3 = 7 # we bounded d to get rid of the |x + 3|
                              Then |x² − 3²| = |x − 3| · |x + 3| < 7 · |x − 3| < 7d ≤ 7 · (e/7) = e # this is where d = min(1, e/7) comes from
                    So |x − 3| < d ⇒ |x² − 3²| < e # assumed the antecedent, got the consequent
          So ∀ x ∈ ℝ, |x − 3| < d ⇒ |x² − 3²| < e
Then ∀ e ∈ ℝ⁺, ∃ d ∈ ℝ⁺, ∀ x ∈ ℝ, |x − 3| < d ⇒ |x² − 3²| < e # since we assumed a generic e ∈ ℝ⁺

This is my version of this proof, which shows that for every e, there exists a d such that for every x, the antecedent implies the consequent.

That's it for this week! Thanks for reading.

Sunday 5 October 2014

The week is finally over, which means it is time to write another slog! To be quite honest I almost forgot to write it due to all the assignments that I had going on. But now that I have finished all of them, I can focus on writing my slog for this week.

This week in CSC165, nothing was too difficult to understand, which was nice; in fact, most of what we did this week was more or less review from my MAT137 course, because the material in the two courses overlaps quite a bit. This week we mainly took a look at proofs and the structure of proofs. I have been struggling with proofs in MAT137, so I'm not too confident about my proving skills yet. I hope that through more practice I can master proofs!

In our lectures, we covered what a proof would look like. First we talked about what we need to know in order to start our proofs. The first thing was that we need to understand why we believe something is true: we have to make our claim about what we know, introduce it, and then strengthen the weak parts of our belief. Next we need to show that what we believe is true. Basically, we have to get into the meat of the proof and explain/justify our claims to convince a skeptical peer.


After that, we looked at the outline of a proof; in other words, what it should look like on paper. The way proofs are done in CSC165 is a little different from how we do them in MAT137, but it still helps to understand the structure of proofs. The following outline is what we did in lecture.

The proof starts off by assuming the generic part of the statement; in this case, we are assuming for all x's. Then we assume the antecedent. Notice that the structure has been indented once. This is to show that this is under the previous assumption of all x's. Next we indent again; now we are in the meat of the proof. This is where we begin to explain/justify our claim with results and examples. Each step follows from the previous step, and this goes on until we show the result that we need. Once the final result is achieved, we jump back in indentation to where it lines up with the assumption of the antecedent. Here we say that the antecedent implies the consequent, because we got the consequent result that we wanted. We then jump back in indentation again and conclude that for all x's, the antecedent does imply the consequent.

Now that the structure of the proof has been covered, we took a look at an example, 
In symbol form it would be: ∀ (x, y) ∈ ℝ⁺, x > y ⇒ √(xy) < (x + y)/2
I'm not entirely sure how to prove this statement, but the structure would look something like this.

First we assume the generic part of the statement: we assume that x and y are positive real numbers. We then indent and assume the antecedent, which is that x is greater than y. Next we indent again and show the actual proof, i.e., that the square root of x multiplied by y is less than x plus y, all over 2. Once we get the result we need, we jump back in indentation and say that since we assumed x is greater than y and showed that √(xy) < (x + y)/2, we can introduce the implication that the antecedent implies the consequent. Lastly, we jump back in indentation again and conclude that for every pair of positive real numbers (x, y), this implication holds.
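Written out as a skeleton in the same style (my own sketch; the middle is left as dots since I haven't actually done the algebra):

Assume x, y ∈ ℝ⁺ # generic x and y
          Assume x > y # antecedent
                    ... # algebra showing √(xy) < (x + y)/2
          Then x > y ⇒ √(xy) < (x + y)/2 # introduce ⇒
Then ∀ (x, y) ∈ ℝ⁺, x > y ⇒ √(xy) < (x + y)/2 # introduce ∀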

So this week's slog was definitely a bit more interesting now that we are talking about proofs. I even included some math symbols. I hope that perhaps by next week or the week after, I will be able to start proving questions and share them on my slog! That's all for this week so thanks for reading!

Friday 26 September 2014

This week in CSC165, we started off with conjunctions, disjunctions, negation, and truth tables. For me this stuff was mostly understandable, but there were times when I was just completely confused. I hope in this slog I will be able to elaborate on what confused me.

So getting right into it: I mainly understood the material that was taught this week, but what confused me was the part about tautology, satisfiability, and unsatisfiability. I understood that a tautology is a statement that is true in every possible world; in other words, every example works. Satisfiable is where it is possible to make the statement true, so there is at least one example, and unsatisfiable is when the statement is never true, meaning there are no examples that can satisfy it. What was confusing for me was the situation given in Wednesday's lecture. After looking at it again, I understand it a bit more, but it is still quite confusing with the sets. On the other hand, the satisfiable and unsatisfiable situations are easier to understand because they are more straightforward. For example, the unsatisfiable situation says that there is a P(x) and not P(x), which is not satisfiable at all.

On another note, this week's tutorial was quite challenging. I was confused about the first question in the tutorial exercise because I did not really know how to convert the statements into symbolic form, but it was actually fairly simple. For example, one of the statements was "No course is a prerequisite of itself". I understood that I was dealing with one course instead of two courses this time; also, since it said "no course", I immediately thought that it must be true for all courses. This was actually pretty similar to part A of question one, except that here the course and its prerequisite are the same course, so we would replace CSC108 with the course itself. For most of the questions I would get the right symbolic form but in the wrong order. For example, one of the statements was "Some course is not a prerequisite for any course". If we started with "for every course, there exists a course that is not a prerequisite of it", it would be wrong, because we are not specifying one single course. Instead we say "there exists a course that is not a prerequisite of any course", so that the same course works in every case. Therefore, order sometimes matters.
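To put that last example in symbols (my own rendering, with C standing for the set of courses and Prereq(a, b) meaning "a is a prerequisite of b"): the correct form is ∃ c ∈ C, ∀ d ∈ C, ¬Prereq(c, d), while the wrong order, ∀ d ∈ C, ∃ c ∈ C, ¬Prereq(c, d), would allow a different c for each d, which is not what the English sentence says.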

And this concludes this week's slog. It was a bit boring for me to write but I hope next week's will be better. Thanks for reading!