Sunday 30 March 2014

Sorting and Efficiency

Sometimes, after returning a list of some kind from my code, I find myself wanting or needing to clean it up so that it is properly ordered. To do this, I might use a sorting method, which allows me to sort the list into the order of my choosing. There are many different types of sorts to choose from, and they all work in different ways: bubble sort, selection sort, insertion sort, shell sort, merge sort, and quick sort. Bubble sort, as the name implies, works by repeatedly comparing adjacent items and swapping them when they are out of order, so that the largest unsorted item "bubbles" up to its proper place in the list on each pass. Selection sort works by selecting the largest item in the unsorted part of the list on each pass and swapping it into its final position by the end of that single pass. Insertion sort works by building a sorted sublist that starts with the first item of the list and grows with each pass, as the next unsorted item is inserted into the sublist in its proper place; by the end, the sublist has become the fully sorted list. Shell sort is essentially insertion sort applied not to the whole list at once, but to sublists of items spaced a fixed gap apart, with the gap shrinking until the final pass is an ordinary insertion sort. Merge sort works by breaking a list down recursively into smaller and smaller parts, which are then compared and merged back together in the proper order into a new, fully sorted list. Finally, quick sort works by choosing a pivot value and partitioning the list so that everything smaller than the pivot ends up on one side and everything larger on the other, then sorting each side the same way until the whole list is in order.
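
As a concrete illustration of one of these, here is a minimal sketch of insertion sort in Python (my own example, not course code), assuming a plain list of comparable items:

    def insertion_sort(lst):
        """Sort lst in place by growing a sorted sublist from the left."""
        for i in range(1, len(lst)):
            current = lst[i]
            j = i - 1
            # Shift larger items in the sorted part one slot to the right.
            while j >= 0 and lst[j] > current:
                lst[j + 1] = lst[j]
                j -= 1
            # Drop the current item into the gap that was opened for it.
            lst[j + 1] = current

So insertion_sort([3, 1, 2]) rearranges the list into [1, 2, 3] in place.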

The different sorts take different amounts of time to complete, depending on the type of sort and the length of the list being sorted. The amount of time, or efficiency, is represented using big-O notation, which is written in terms of n, the length of the list. For example, insertion sort runs in O(n^2) time in the worst case, meaning that the run time grows in proportion to the square of the list's length. Another example is merge sort, which runs in O(n log n) time: the list is split in half about log n times, and each level of splitting takes roughly n steps of work to merge the halves back together. Because log n grows very slowly compared to n, this is useful for larger lists, as merge sort can deal with them with little difficulty, whereas something with O(n^2) run time slows down dramatically as the list gets bigger.
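
To show where the O(n log n) comes from, here is a rough sketch of merge sort (again my own example, not the course's implementation): each call splits the list in half, and the merging step at each level does work proportional to the number of items.

    def merge_sort(lst):
        """Return a new sorted list: split recursively, then merge."""
        if len(lst) <= 1:
            return lst[:]
        mid = len(lst) // 2
        left = merge_sort(lst[:mid])
        right = merge_sort(lst[mid:])
        # Merge the two sorted halves into one sorted list.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

The list can only be halved about log n times before the pieces are single items, and each of those levels does roughly n steps of merging, which is where the n log n total comes from.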

Sunday 23 March 2014

Week 10

This past week my partners and I found that we were having some problems with A2 part 2, as some of the more specific parts of the code were difficult to get just right. It was mainly the dot and bar regex part of the code that was problematic, as we were unable to make it work with the proper parameters. However, after some trial and error, we were able to make it do what it was supposed to. But that wasn't actually the most interesting part of the assignment; I found that the best part was editing the code after it worked to make it shorter and more concise. We found several instances of exactly the same code repeated, because most of the regexes required a similar format. Because of this, we were able to combine these repeated pieces into a single pathway that then differentiates between the different types, and we shortened each part of the code by several lines.
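
This isn't our actual assignment code, but a made-up sketch of the kind of refactor I mean: two near-identical branches, one for the dot case and one for the bar case, collapsed into a single function that takes the differing symbol as a parameter.

    # Hypothetical before-and-after; the names and checks are invented
    # for illustration, not taken from A2.

    # Before: two functions that differ only in the symbol they look for.
    def looks_like_dot_regex(s):
        return s.startswith('(') and s.endswith(')') and '.' in s

    def looks_like_bar_regex(s):
        return s.startswith('(') and s.endswith(')') and '|' in s

    # After: one pathway that differentiates on the symbol it is given.
    def looks_like_binary_regex(s, symbol):
        return s.startswith('(') and s.endswith(')') and symbol in s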

This past week also came with another problem, and that was in this week's lab. My partner and I could not make any headway with the code, despite the fact that, at least in theory, it was relatively similar to the work done for exercise 3. At the beginning of the lab, I thought to myself that it would probably be a piece of cake because of the similarity to the exercise 3 code, but we soon found that this was not the case. In the case of the inorder traversal, all we could accomplish was getting it to handle a few specific cases, and the output looked nothing like the written examples. We were already more than an hour in by the time we decided the first part was a lost cause, and so we moved on to the second. However, we once again found ourselves stumped, and so we reached the end of the lab without accomplishing anything.
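
For my own reference, here is a minimal version of what I think the inorder traversal was supposed to do, assuming the nested-list tree representation we had been using ([value, left, right], with None for a missing subtree); the lab itself may have used a slightly different setup.

    def inorder(tree):
        """Return the values of tree in inorder: left subtree, root, right.

        Assumes tree is either None or [value, left_subtree, right_subtree].
        """
        if tree is None:
            return []
        value, left, right = tree
        return inorder(left) + [value] + inorder(right)

For example, inorder([2, [1, None, None], [3, None, None]]) gives [1, 2, 3].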

Sunday 16 March 2014

Overcoming difficulties with code

Something that really challenged me this week was actually exercise 3. Up until now, the exercises had seemed a lot simpler and easier to complete, but I found myself having some trouble with the third. While part A was fairly easy, and only required a little tweaking by moving certain lines around and renaming some things, I found that part B was much harder to tackle. The reason is that I could not for the life of me find a way to both find the deepest path and add up the values of each node along it. I was able to get it to find the deepest path fairly easily, but all it would do was return the depth as a number, not the sum of the values along the path. I was also able to write a separate piece of code that would collect the values of each node, but it would not add them up or take the path depth into account. Finally, however, I realized that I could fix all of my problems by slightly altering the code that found the longest path. I merely changed the first part, where I added the root's value at the beginning, so that it would recursively do the same at each smaller level down the path, while still tracking the longest path with the greatest sum of values.
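
Out of context that probably sounds vague, so here is a hedged sketch of the idea (not my submitted code, and the real exercise may have used a Tree class rather than nested lists): return both the depth and the running total together, so the recursion can prefer the deeper path and carry its sum back up.

    def deepest_path_sum(tree):
        """Return (depth, total) for a deepest root-to-leaf path in tree,
        preferring the larger total when two paths are equally deep.

        Assumes tree is None or [value, left_subtree, right_subtree].
        """
        if tree is None:
            return (0, 0)
        value, left, right = tree
        # Tuples compare by depth first, then by total, so max() picks
        # the deeper (and, on ties, the richer) of the two subpaths.
        depth, total = max(deepest_path_sum(left), deepest_path_sum(right))
        return (depth + 1, total + value)

For example, deepest_path_sum([5, [1, None, None], [2, [4, None, None], None]]) returns (3, 11), since the deepest path is 5 -> 2 -> 4.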

Last week, I also did something that I had not done before, and that was take part in the after-lecture discussions with the prof and other students who had questions. This was back when A2 part 1 was being discussed (I missed a week of the blog, so I was unable to talk about this until now), and we were discussing the different ways that the parts of the code could be implemented. I no longer remember exactly what was being discussed while I was there, but I still remember how the discussion process made many concepts a lot easier to grasp. With each student and the professor bringing different viewpoints to the table, the different aspects of the code were broken down into much simpler pieces, allowing for an easier understanding of what they were meant to do and how they could best be implemented. I did not stay for the entire time, as I left after maybe five or ten minutes, but I feel that I took a lot away from the experience, and if I find that I am having trouble with any work in the future, I think I know a new place to turn.

Sunday 2 March 2014

Recursion (again)

Here we go again with recursion. This seems to be a bit of a recurring (see what I did there?) topic, as it should be, because of its prominence and importance in Python programming. Recursion is, at its core, a very simple concept in computer science. It is code that calls upon itself (or, more generally, something defined in terms of itself), working on smaller and smaller instances of the problem until it reaches a case simple enough to answer directly. It can be written without explicit loops such as "for" and "while", and since the same code handles every level of the problem, no matter how deeply nested, it is able to tackle some very large problems. Recursive code also tends to be much shorter and less complex than most other forms of code, which allows for much simpler code that is easier to navigate. It is also capable of dealing with things that contain smaller instances of themselves (recursive data structures), like tuples within tuples for example, in a nice and simple fashion.
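
To make that last point concrete, here is a small example of my own (not course code) of recursion over tuples within tuples:

    def deep_sum(item):
        """Return the sum of all the numbers in item, where item is either
        a number or a (possibly nested) tuple of numbers and tuples."""
        if isinstance(item, tuple):
            # Recursive case: sum each element, however deeply nested.
            return sum(deep_sum(x) for x in item)
        # Base case: a plain number is its own sum.
        return item

So deep_sum((1, (2, 3), ((4,), 5))) gives 15, no matter how deep the nesting goes, without any special handling for each level.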

While "for" loops and "while" loops and other common forms of coding are incredibly useful for programming, and capable of a great many things, some thing are just made so much easier through the use of recursion. Recursion is an incredibly useful form of programming, as it is capable of a great many things, like mentioned before. Due to its capability to carry out the code infinitely, it can deal with some problems with multiple levels much faster and more efficiently than any kind of loop might be able to. Because of this, programmers may find this kind of code better for problems that require a great many iterations in order to be completed. This is one of the major reasons for which this type of programming is so desirable to be used. People always desire efficiency, especially when it is simple, as it is with recursion. It is also a valuable form of programming to know, as it can keep written code rather short, allowing for pleasant style. This elegant shortness also allows for simpler error searching if any problems arise, and so it makes the life of any programmer much easier when it can be used. Ideally, in my opinion, when recursion can be used, it should be. 

Sunday 16 February 2014

Trees

While the material covered this week in lecture wasn't entirely new, the way in which it was described and dealt with was. I had come across it during a lab several weeks ago, in which we dealt with lists within lists containing a series of numbers and "None" placed within them; I am of course referring to "trees". It was difficult to deal with then, and so I was fortunate that my lab partner that week had a better grasp on the idea than I did. The lecture this week did shed some light on the subject though, and so I am finding it a little easier to grasp. However, despite the added help, I am still finding the whole thing a little tough to tackle, especially as far as writing the code goes.

As far as the concept goes, though, I do understand what trees are. The diagrams provided in the slides and during the lecture helped me understand how they work, as did the definitions and key words associated with binary trees. Through this, I have also managed to learn a little more about recursion, and how the various parts call upon one another. I of course knew beforehand what recursion was, and how to write the code for it, but the visual help aided in cementing in my mind what recursion really is.
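
As a reminder to myself, here is a tiny sketch of the kind of "lists within lists with None" tree from the lab, along with a recursive function that walks it; the exact representation is from memory, so the details are approximate.

    # A tree as nested lists: [value, left_subtree, right_subtree],
    # with None standing in for a missing subtree.
    tree = [7, [3, None, None], [9, [8, None, None], None]]

    def height(t):
        """Return the number of levels in t (0 for an empty tree)."""
        if t is None:
            return 0
        # Each call handles one node and leaves the rest to recursion.
        return 1 + max(height(t[1]), height(t[2]))

    print(height(tree))  # prints 3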

Sunday 9 February 2014

Recursion

I must say that, upon first being introduced to recursion, I was not entirely sure of its significance. Using "for" loops and "while" loops had always worked for me, and so I had seen no need for anything else. It was also somewhat difficult at first for me to grasp recursion, despite its simplicity, and so that proved to be another deterrent. However, upon further inspection of examples, and my own experiences with attempts to use it, I found that it became quite handy. Its short and usually simple code made it preferable to a "for" loop or a "while" loop, which would sometimes become quite long and redundant. I also found that it proves to be much more versatile, as it can express many levels of repetition in fewer lines than the loops, while also being capable of doing more than a loop of equivalent or even greater length.
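
Here is a tiny comparison of the two styles, using factorial only because it is the standard toy example (it is not from the course work):

    def factorial_loop(n):
        """Compute n! with a while loop."""
        result = 1
        while n > 1:
            result *= n
            n -= 1
        return result

    def factorial_recursive(n):
        """Compute n! by defining it in terms of a smaller case."""
        if n <= 1:
            return 1  # base case: 0! and 1! are both 1
        return n * factorial_recursive(n - 1)

Both give the same answers, but the recursive version reads almost exactly like the mathematical definition.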

This week's biggest problem, though, came from the lab work. My partner and I were having extreme difficulty finalizing our first piece of recursion code, and we could not tell at all where the problem was coming from. Therefore, we decided to move on to the freeze code, which we finished relatively quickly, in order to clear our minds of the first one. We ended up staying at the lab a good half hour longer than we were meant to in order to finish the code and determine the source of the problem. We were at our wits' end until finally, almost as a joke, we decided to add "return" to the end of our code, thinking that it couldn't possibly be the source of our misery. Lo and behold, our code worked perfectly after this, and we laughed at ourselves for missing such a simple, but important, problem.
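
I no longer have the lab code, but the mistake looked roughly like this (the function here is made up; only the lesson is real): without the final return, the recursive result is computed and then thrown away, so the call quietly gives back None.

    def count_leaves(tree):
        """Count the leaves in a nested-list tree [value, left, right]."""
        if tree is None:
            return 0
        value, left, right = tree
        if left is None and right is None:
            return 1
        # The bug was leaving 'return' off a line like the one below,
        # which makes the function silently return None instead.
        return count_leaves(left) + count_leaves(right)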

Sunday 2 February 2014

Exceptions and raising errors

My work in computer science over the past week made me much more familiar with exceptions and raising error messages, due to the lecture material, the class exercise, and the lab. Prior to this, I did not have any experience with these things, so the week really opened my eyes. The irony, of course, was that even though this was an entirely new topic for me, I found that I was able to grasp most of the concepts rather quickly, thanks to the slides and other links posted on the class site. I actually found it rather fun to work with the different errors and error messages, as they provided an interesting look into some of the most crucial building blocks of a good piece of programming.
I did, however, have a short moment of difficulty when attempting to use the "try" statement. At first, I was unaware of how the "except" part of "try" worked, and so I would simply "try" a block of code without writing an "except" clause. Of course this created some errors, and it took me a little while of searching the provided sources to finally grasp why the errors were showing up, and why "except" was needed. After learning that it was needed, though, and that it works a bit like an "if" statement in terms of its construction, I was able to use it properly and finish the code that I was working on.
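
Here is a minimal sketch of the kind of raise-and-catch pattern I ended up with; the class name, function, and messages are just for illustration, not from the actual exercise.

    class NegativeNumberError(Exception):
        """Raised when a negative number is passed where it isn't allowed."""
        pass

    def square_root(x):
        """Return x ** 0.5, raising NegativeNumberError if x < 0."""
        if x < 0:
            raise NegativeNumberError('cannot take the root of ' + str(x))
        return x ** 0.5

    try:
        print(square_root(-4))
    except NegativeNumberError as error:
        # The except clause works a bit like an 'if': this branch runs
        # only when the named error is raised inside the try block.
        print('Caught it:', error)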