Saturday 29 March 2014

Week 10

This is it! The final hurrah! In my last (thankfully) Slog post, I will be doing what essentially amounts to a recap of everything I have done in the course so far.

I started out this course having learnt only the bare minimum of programming. I knew how to make lists and knew the basics of unit testing, but beyond that I was a mess. I hadn't scored a very high mark in 108, but I've begun to turn that around.

In 148, we started off by learning all about OOP, which was the foundation for essentially everything else we did in the course. OOP is amazing, and it lends itself to a huge range of situations, since the objects you program with carry both their data and the tools (methods) that work on that data.

Next, we learnt an assortment of other Python techniques, such as errors (exceptions) and namespaces. I found these little techniques really interesting, since they are part of the foundation of the Python language itself, so it was cool learning about them.

Following this, we spent the rest of the course on recursion and trees (including linked lists). This is where the material became extremely difficult for me, as I had trouble picturing a lot of it in my head. Luckily, I had a super awesome TA who guided me through the process, talked me through the labs and gave me extra help throughout the course.

As always, for the last time, I would like to mention a post on a fellow classmate's blog: http://47tulips.wordpress.com/. I really felt that this student put a ton of work into his slog, and really communicated to me on a basic level how programming works. I commend him for his effort.

That's all for now!

Am I finally a pythonista by now!?

Week 9

This week, I would like to talk about sorting algorithms. Now, if there is one thing I learnt about the sorting algorithms we could choose from, it is that we should NEVER use bogo sort. As my partner and I discovered during the lab, it would take practically forever to sort even a list of 10 elements.

In this Slog, I would like to touch upon the major sorts we learnt this week: merge sort, quick sort and selection sort. Instead of talking about how each of them specifically works (which would take forever), I will talk about their efficiency. You see, the key part of any sorting algorithm is to sort the list in as little time as possible. In theory, the ideal would be constant time, where the time taken does not grow with the size of the list at all. Unfortunately, that is not generally possible. To measure how the running time grows, we use big-O notation, such as O(n), which describes how the running time scales with the length of the list. When an algorithm's complexity is something like O(lg n) or O(n), that is considered good, and shows that it handles a list efficiently.

Starting off with selection sort.
Selection sort improves on bubble sort in the number of swaps it makes: on each pass it selects the largest (or smallest) remaining value and swaps it straight into its final position. However, the runtime is still O(n**2), and so is the worst-case runtime of insertion sort, which inserts each value into its correct place as it works through the list from beginning to end.
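
To keep it straight in my own head, here is a rough sketch of selection sort (my own version, not the code from lecture): each pass finds the smallest remaining item and swaps it into place.

def selection_sort(lst):
    """Sort lst in place by repeatedly selecting the smallest remaining item."""
    for i in range(len(lst)):
        # find the index of the smallest item in lst[i:]
        smallest = i
        for j in range(i + 1, len(lst)):
            if lst[j] < lst[smallest]:
                smallest = j
        # swap it into its final position; still O(n**2) comparisons overall
        lst[i], lst[smallest] = lst[smallest], lst[i]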

Shell sort is also O(n**2) at worst - it splits a list into sublists and sorts the sublists before piecing the list together again. Merge sort is a recursive algorithm which repeatedly splits a list in half, sorts the smaller lists, and then merges the sorted halves back together, with a worst case runtime of O(n log n).
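
Here is my own little sketch of merge sort (again, not the course code): split the list, sort each half recursively, then merge the two sorted halves.

def merge_sort(lst):
    """Return a new sorted list; O(n log n) even in the worst case."""
    if len(lst) <= 1:          # base case: nothing left to split
        return lst[:]
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])
    right = merge_sort(lst[mid:])
    # merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged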

Quicksort improves upon merge sort in that it does not use additional storage: it picks a pivot, a value used to split the list, and then calls itself recursively to sort the smaller lists on either side of that pivot. Quicksort is on average O(n log n) and in practice faster than many other O(n log n) algorithms, although it does have a worst case runtime of O(n**2), which is rare.
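
And a quick sketch of quicksort, again just my own simplified version. This one builds new lists for clarity, so it doesn't show off the in-place, no-extra-storage advantage, but the pivot idea is the same:

def quick_sort(lst):
    """Return a new sorted list; average O(n log n), worst case O(n**2)."""
    if len(lst) <= 1:
        return lst[:]
    pivot = lst[0]                              # the choice of pivot matters!
    smaller = [x for x in lst[1:] if x < pivot]
    larger = [x for x in lst[1:] if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)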

There is also Timsort (O(n log n)), a hybrid sorting algorithm that combines insertion sort and merge sort. You can also measure the best case runtime (often written with big omega) and the average case runtime (often written with big theta). However, if someone needs a sorting algorithm to sort a lot of information at once, or in all sorts of different orders, then an algorithm is really only as good as its worst case.
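
One nice bonus fact: Python's own sorted() and list.sort() are implemented with Timsort, so in practice we usually just call them. A toy timing of my own (the numbers will obviously vary by machine):

import random
import time

data = [random.randint(0, 1000000) for _ in range(100000)]

start = time.time()
result = sorted(data)          # Timsort: O(n log n) even in the worst case
print('built-in sort took', time.time() - start, 'seconds')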

And there we have it!

Saturday 22 March 2014

Week 8

This week, I will be talking about "linked lists". I used to hate linked lists (and recursion in general), but now I finally realize how useful they can actually be.

Linked lists basically work on the principle that you have an element, and that element is connected (linked) to the rest of the list, which in turn is connected to the rest of its own list, and so on. Essentially, what we are doing is giving a recursive definition of a linked list.
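
In code, that recursive definition is really just a node holding a value plus a reference to the rest of the list. A minimal sketch of my own (the names in our labs were a little different):

class LinkedListNode:
    """A node in a linked list: a value, plus a reference to the rest."""

    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt   # the 'rest' of the list, or None at the end


# the list 1 -> 2 -> 3, built back to front
lst = LinkedListNode(1, LinkedListNode(2, LinkedListNode(3)))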

The true purpose of a linked list comes into play when we consider how a programmer adds and removes elements from a list. With a traditional list, this happens by shifting all of the elements over and then inserting the new element at the position we require. With a linked list, however, we don't have to shift anything at all: we only need to update the references around the spot where we are inserting or deleting items.
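
For example, inserting or deleting right after a given node only touches a couple of references, no matter how long the list is (again my own sketch, using the LinkedListNode class above):

def insert_after(node, value):
    """Insert value right after node in O(1) time."""
    node.nxt = LinkedListNode(value, node.nxt)


def delete_after(node):
    """Remove the node right after node, also in O(1) time."""
    if node.nxt is not None:
        node.nxt = node.nxt.nxt   # just skip over the deleted node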

I had some practice this week with linked lists - in particular, with how a linked list can be adapted to various functions. I did have difficulty with lab #6, which focused on this particular topic, but by following the TA's suggestion of drawing the linked lists out with diagrams and pictures, and thinking about the topic recursively, I was a master at this in no time at all.

I would like to make a comment on one other blog this week, from http://adauoftcsc148.blogspot.ca/. He talked about viewing a linked list as a "head" and a "tail", which is another way to visualize these types of lists. I enjoyed learning about this representation since it makes creating a linked list feel very intuitive to me: you only worry about these two parts, and about how they are connected through the recursive linking of the nodes in the list.

Thursday 27 February 2014

Week 6 (Extended) - Test Studying, Trees

*Since my class lecture time is on Wednesday nights, this week's (or rather last week's) blog post is slightly late. I will be doing another post by next Wednesday in time for my next lecture. My apologies!


For this week's blog post, I will be discussing two main things: my approach to learning how "trees" work, as well as my approach to, and opinions on, test #1, which just occurred last night. On to trees. Python trees, not surprisingly, are quite dissimilar to actual trees - the only similarity between the two is their "basic" components: leaves, nodes (branches), and roots. Because Python is not a graphics program, we can't actually draw trees - instead, one implementation uses nested lists, where how deeply a list is nested corresponds to how far it sits along the path from the root down to the leaves. This is an extraordinary technique, because using recursion you can, in a sense, cycle down through the tree into the "subtrees" until you reach your base case: a leaf that contains a piece of data. My one struggle with this idea is that, while the code we used to implement the tree makes conceptual sense, I still have trouble understanding the concept of a linked list (which was evident from my struggles in lab #6).
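
To keep the picture in my head, here is a tiny sketch of that nested-list idea (my own toy version, not the exact lecture code): the tree is a list, the subtrees are lists inside it, and recursion drills down until it hits a leaf.

# a small tree as nested lists: leaves are plain values,
# internal nodes are lists of subtrees
tree = [[1, 2], [3, [4, 5]], 6]


def contains_leaf(t, value):
    """Return True if value appears as a leaf somewhere in t."""
    if not isinstance(t, list):     # base case: t is a leaf
        return t == value
    return any(contains_leaf(subtree, value) for subtree in t)


print(contains_leaf(tree, 5))   # True
print(contains_leaf(tree, 9))   # False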

I also completed my first test yesterday! In 108, I always felt that tests were the component of the course I struggled with - luckily I have figured out my flaws and found better ways to study. To study, I made sure I knew my notes in depth, knew the important data structures and, most importantly, understood how to trace all of these new coding techniques. When it came to the test, I felt I did well, although I did make some mistakes I shouldn't have. For the first question, I felt the recursive tracing went fairly well, although it took me a while to understand at first, so I was unable to finish the last two calls in time. The second question was similar to E2a: a very basic function that raises errors if certain conditions are met. The third question was also quite similar to the lecture notes, a basic implementation of __contains__ that I had luckily included on my cheat sheet. The very last question was an implementation built on a base Stack class, which I understood, but I unfortunately forgot that sum() doesn't work directly on a new ADT - I tried the technique of creating a temporary stack, but couldn't quite figure out how to record the values in the stack. Overall, a decent result, but I could be doing better.
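
For my own notes, the temporary-stack trick I fumbled looks roughly like this (a sketch assuming a Stack ADT with push, pop and is_empty, like the one from lecture; the stand-in Stack class here is mine):

class Stack:
    """A minimal stand-in for the Stack ADT from lecture."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return self._items == []


def stack_sum(s):
    """Return the sum of the values in Stack s, leaving s as it was."""
    temp = Stack()
    total = 0
    while not s.is_empty():
        value = s.pop()
        total += value
        temp.push(value)        # remember each value so we can restore s
    while not temp.is_empty():
        s.push(temp.pop())      # put everything back in its original order
    return total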

Yet again, I would like to highlight the important posts my classmates have made in this course. First, a post by http://achievementsuperlog.blogspot.ca/ on "Trees eh". He touched upon a very important tree concept known as "traversal", which is how you visit every node in the tree. In class we learned various techniques (pre-order, post-order, in-order) to accomplish this, and reading his post reminded me to review those techniques myself.
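
As a reminder to myself, the three orders look roughly like this on a simple binary tree (my own sketch, with a made-up BTNode class rather than the course's):

class BTNode:
    """A binary tree node with a value and optional left/right children."""

    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right


def preorder(node):
    """Visit the node first, then its subtrees."""
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)


def inorder(node):
    """Visit the left subtree, then the node, then the right subtree."""
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)


def postorder(node):
    """Visit both subtrees first, then the node."""
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.value]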

Another important post I would like to draw attention to is http://aheapofpythons.wordpress.com/ on "Trees" again. This post discussed how we had actually already been exposed to trees in lab #5 - something even I hadn't realized, since in that lab we essentially had a tree that we were recursively traversing to see whether a number existed in it (sort of like the __contains__ method).

That's all for now! Back to studying for my other midterms!

Sunday 16 February 2014

Week 5

Once again, sorry for my late post! I've been busy working away on my TOAH model and the solution to the 4-stool problem - pretty much addicted to cheese at this point - but it's now finally done!

For this week's blog, I will be discussing two main things: a recap on recursion, as well as a look inside my own problem solving process for A1, which is due very soon. Firstly, recursion.

Recursion at first seemed to me like a concept very similar to loops, with for and while loops doing something over and over again until a certain condition has been met. However, after completing lab #4 and going through all of the recursion exercises it gave us, I finally understand this technique. In particular, solving the backwards writing exercise really helped to solidify my understanding of the base case, and of how each recursive call shrinks the problem a little bit until it finally reaches that base case.
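
I don't have the exact lab code in front of me, but for me the backwards-writing idea boils down to something like this: the base case is a string with at most one character, and every other call peels off one character and recurses on a slightly smaller piece.

def backwards(s):
    """Return s written backwards, using recursion."""
    if len(s) <= 1:                    # base case: nothing left to flip
        return s
    return backwards(s[1:]) + s[0]     # smaller problem, plus one character


print(backwards('recursion'))   # 'noisrucer'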

The other important project I was hard at work on this week was A1. Without sounding too overly dramatic, A1 was probably one of the toughest assignments I have had to do in university. Each step tested my dexterity and ingenuity in Python, and required me to know the topics covered in the assignment like the back of my hand. By the end of the assignment, I finally understood recursion and how to go "up and down" the recursion tree, how to construct the proper classes, and the right syntax to use in all of these situations. One of the trickier parts of the assignment for me was definitely my implementation of the ConsoleController. I initially had a lot of difficulty with this step, as I was unsure how to call the proper methods and how to actually ask the player to input moves. After some thinking with my partner, we ended up deciding to let the player input moves as a tuple - selecting the origin and destination stools for the cheese - and to raise an illegal move error whenever one was needed.
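
Just to record the idea (and definitely not our actual submission), the input-parsing part looked roughly like the sketch below; the names get_move and IllegalMoveError here are only stand-ins for whatever the real assignment code called them.

# Rough sketch of the input-parsing idea only; the real assignment classes
# (TOAHModel, ConsoleController, etc.) had their own interfaces, and
# IllegalMoveError / get_move here are hypothetical stand-ins.
class IllegalMoveError(Exception):
    pass


def get_move(num_stools):
    """Ask the player for an (origin, destination) pair of stool indices."""
    text = input('Enter a move as "origin destination": ')
    try:
        origin, dest = (int(part) for part in text.split())
    except ValueError:
        raise IllegalMoveError('moves must be two whole numbers')
    if not (0 <= origin < num_stools and 0 <= dest < num_stools):
        raise IllegalMoveError('stool numbers are out of range')
    return origin, dest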

For this weeks blog post, I would again like to highlight two other blog posts, that I found to be particularly compelling.

Firstly, the most recent blog post by http://albertcalzarettocs.blogspot.ca/. In his post, Albert describes his difficulty with tracing the recursion tree up and down for the final step of A1 - a problem my partner and I faced as well. One piece of advice I could offer him would be to always start with the base case (3 stools) and then see how to extend that case in the simplest way possible. That method worked for my partner and me, although it did take an inordinate amount of time to solve the problem in the end.

Secondly, I would like to highlight the most recent post by http://bounce14.wordpress.com/, where the author discusses tracing recursion. This relates both to the post I commented on above and to A1. In his post, he describes his method for tracing these functions, and suggests using a Python visualizer to step through them - something I may start using in the future to solidify my understanding of this topic.

That's all for now!
Thanks.


Wednesday 5 February 2014

Week 4

Sorry for the delay all! Been away all weekend at a debate tournament with no wifi, so I was unable to update my blog in time.

This week I will be talking about "exceptions" and when to "expect" them (some funny Python humour). Exceptions are something I was already very familiar with in Python, due to the number of syntax errors I have made in the past. Exceptions are, in a sense, "ingrained" in the Python language - but they normally only appear when you make a mistake, or do something that Python considers invalid when running your code. That begs the question, though: what happens when you wish to raise exceptions yourself, even if Python believes your code is valid in terms of its syntax and how it runs? This is where the idea of coding your own exceptions comes in. To code an exception, we use our knowledge from OOP and write it as a class that inherits from another exception. By this I mean that if we were to hard code in some exceptions, we would have our new exception classes inherit from existing ones, such as the general Python "Exception".
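
As a tiny made-up example of my own (not from the course handouts), hard coding an exception really is just writing a small class and raising it yourself:

class NegativeCheeseError(Exception):
    """A made-up exception for a situation Python itself would never catch."""
    pass


def add_cheese(count):
    if count < 0:
        # Python thinks this code is perfectly valid, but *we* know a negative
        # amount of cheese makes no sense, so we raise the exception ourselves.
        raise NegativeCheeseError('cannot add ' + str(count) + ' cheeses')
    return count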

Over the past week, I have had some experience dealing with exceptions and understanding how they work. The lab this week really reinforced my understanding of this topic, as coding the "travel_to_new_pos" function showed me when and how an exception needed to be raised in certain instances. Furthermore, throughout this lab I came to understand exceptions even more when it came to using NotImplementedError, which I realized is extremely helpful when a method appears in two different classes: the base class raises it to signal that it cannot provide the behaviour itself, and each subclass has to supply its own version.
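
The way I understand the NotImplementedError pattern, it looks roughly like this generic sketch (my own example, not the actual lab classes): the base class declares the method, and each subclass is forced to fill in its own version.

class Shape:
    """A base class whose subclasses must supply their own area()."""

    def area(self):
        raise NotImplementedError('subclasses must override area()')


class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):               # the override the base class demands
        return self.side * self.side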

For my blog this week, I would like to call to attention, to two different blogs, and comment upon them.

Firstly, http://alexcsc148slog.blogspot.ca/. In it, he said he had trouble figuring out the solution to the second lab: shifting the list without increasing the complexity of the overall program. As a hint, I would suggest he consider looking at the list in terms of "nodes", and looking up the concept of a linked list, where what the computer remembers and stores are references to the first and last members of the list. I am sure he will get it in due time.

Secondly, I wish to comment upon the blog http://csc148mn.blogspot.ca/. In it, he describes (in my opinion) a fantastic learning method and the steps he took to understand exceptions, and how try/except clauses work when you look for exceptions in your code. I really enjoyed his post, since he talked about a lot of the main issues I faced when learning this concept myself.

That's all for now!


Monday 27 January 2014

Object Oriented Programming (OOP)

Hello fellow students, TAs and instructors. From this blog, you will be able to monitor my progress throughout this course - from my trials and tribulations, to my accomplishments, and all those pesky syntax errors in between. Each week (maybe more often, depending on time constraints), I will be updating this blog with a recap of the material I have learnt in CSC148, along with my own personal reflection on that material. Hopefully, throughout this process I will gain valuable insight into my own problem solving strategies, and come out of this course a stronger, more inventive, and more independent programmer.

To start off my blog, I will be talking about my assigned topic for week 3 of this course, "Object Oriented Programming", or OOP for short. Having already been briefly introduced to this programming technique in CSC108, I had some experience with it, but in that course the material was quite light, and we mainly covered it as a small precursor for those who intended to take 148 in the winter. As a result, when I was introduced to this concept for a second time at the start of 148, the content was far more rigorous and challenging than I had been expecting. As a brief recap, OOP is, as the name suggests, a style of programming that revolves around programming with specific objects. At first, I wondered why this type of programming is so helpful. What's the point of creating specific data types and methods and classes to program for specific objects? This idea frustrated me at first, until I did some background reading and realized how essential this type of programming could be. It wasn't until I slowly worked through the examples in class and came up with the analogy below that I truly understood the powerful nature of OOP.

As an analogy for how I understand OOP, let us say we have an object like a hockey stick that a hockey player would use in a game. Obviously the hockey stick would have to be unique in some fashion: it would have certain characteristics that make it one individual person's stick. These characteristics (or attributes) could include the model of the stick, its colour, weight and length, just to name a few. Now let's say we wanted to code a program with certain functions that, given the characteristics above, produce certain results. For instance, if we used that particular stick and defined a function to swing it in a parabolic arc, we would get a specific trajectory. This can all be done using the conventional style of programming, where we have a "procedural design". However, issues soon arise with this style if we want to change the hockey stick and its attributes. What if the stick becomes more and more worn out every time we shoot with it? What if the colour of the stick slowly fades? What if the weight of the stick increases as we play, due to ice buildup on the rink? Does this mean that every time we wanted a stick with new characteristics, we would have to create a completely new stick from scratch? Furthermore, since our functions are built around the original hockey stick's attributes, they wouldn't adapt as the stick changes. It would be far simpler to start off by creating and coding a basic hockey stick, and then to add to it along the way when we need to change it, letting the functions change with it - which is exactly what OOP allows us to do.
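
To make the analogy concrete, here is roughly how I picture the hockey stick as a class (purely my own illustrative code, not anything from the course):

class HockeyStick:
    """My hockey stick analogy as an actual class."""

    def __init__(self, model, colour, weight, length):
        self.model = model
        self.colour = colour
        self.weight = weight      # in grams
        self.length = length      # in centimetres

    def shoot(self):
        """Shooting changes the very same stick a little each time."""
        self.weight += 5          # ice buildup makes it heavier
        return 'shot taken with a ' + str(self.weight) + ' gram ' + self.model


stick = HockeyStick('Synergy', 'black', 450, 150)
print(stick.shoot())   # the same object changes; no need to rebuild it from scratch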

Another important aspect of classes that was discussed was the concept of "inheritance", and how one class can inherit properties from another. For instance, if we had two classes, you may be wondering: how can we get stuff from one class to the other? How can our methods, which may sometimes be helpful in more than one class, be transferred to another one? This concept actually struck me as something strangely intuitive in the language, since a coding technique like this allows me to seamlessly combine classes and their functions. The way I personally picture inheritance is that if I have a "parent" and a "child", the child obviously inherits some "things" from its parent - and the same goes for parent classes. In Python code, you would write this as class Child(Parent), where the child "inherits" certain things from the class Parent. This is also important, I realized, since it lets you specify certain things in a child class that you only have to write generally in a parent class. However, when it came to the manner in which inheritance operates, I was at first completely unsure how it worked. Essentially, I realized it boils down to three main uses: inheriting as-is, overriding, and altering. The first use, leaving a method out of the child class entirely (or writing only a pass statement), lets you inherit everything from the parent class unchanged. The second use, overriding, is a very special and, I realized, helpful case to consider when using inheritance. What if you want to inherit something sometimes, but not all the time? In this case, you would have to override the code from the parent class. This is done by simply defining a method with the same name in the child class as in the parent class, which tells Python that the child's version should "override" what was defined in the parent class. The very last use of inheritance is altering: running some code before or after the parent class's version. To do this, you start from the overriding idea above and then use the function super() to call the original version from the parent class. This concept initially confused me, as it seemed strange to go back and forth between altered and original versions of classes, but I realized that super() actually clears up the confusion, since Python keeps track of these relationships for you, and all you are required to do as the programmer is specify which class's version you need when calling a specific function. Another alternative to inheritance is composition - a technique in which a class simply uses other classes and modules, rather than relying on inheritance to relate classes together.
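
Putting those three uses into one small toy example of my own: the child can inherit everything with just pass, completely override a method, or alter it by calling super() to run the parent's version first.

class Parent:
    def greet(self):
        return 'hello from the parent'


class LazyChild(Parent):
    pass                          # inherits greet() completely unchanged


class RebelChild(Parent):
    def greet(self):              # completely overrides the parent's version
        return 'hello from the child'


class PoliteChild(Parent):
    def greet(self):              # alters: run the parent's version, then add to it
        return super().greet() + ', and also from the child'


print(LazyChild().greet())    # hello from the parent
print(RebelChild().greet())   # hello from the child
print(PoliteChild().greet())  # hello from the parent, and also from the child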

My personal take on OOP and my analysis of the material this week is brief, but I feel it's important as it gives my own perspective on the material. OOP seems to be quite a helpful tool in computer programming, with unique benefits, especially for code that is repeated quite often, where the objects are always changing yet you want to apply the same functions to them. Secondly, when it comes to the idea of inheritance, this technique seems like an ideal way of creating "hierarchies" of sorts within the classes, ensuring that each class is built upon another. The problem is that over time, with the super() function, it can become confusing to keep track of where in the hierarchy you actually are, so it may be far simpler to use composition and define the classes as I go.

Some problems I faced this week in learning about OOP in Python ranged from struggles as basic as grasping the concept of OOP itself, to coding the exercise given in the second week of the course. In terms of the concepts surrounding OOP, my analogy, and working through the code in class, really helped me to understand the material in far greater depth. In particular, the syntax surrounding classes, with self and all these other new coding techniques, was something I only learnt over time. Secondly, over time I became more fluent in coding classes: understanding what the class was referring to, how to initialize it, and how to use inheritance to relate classes together. When it came time to complete my exercise for part B, I tore through it quickly - I initially only had difficulty understanding when it is appropriate to use self, and how to create the instance variables at the start. Once I realized that self refers to the instance of the class itself, and that instance variables are set up at the start, when we initialize the class, OOP became a breeze for me.

That's all for now!