Hello there!
My semester exams are here, and it has been quite a while since my last post. I'll be back with plenty of interesting posts from the second week of June onwards. Do stay tuned!
To GOTO or to not GOTO?
Most of the languages in use today do indeed have the goto statement. As we already know, the goto statement is used to jump unconditionally from place to place within a program, with destinations marked as labels. As you can guess, this can make a program really hard to read, given its particularly jumpy nature. The programmer, with all his wits, may have a perfectly unambiguous picture of the program in his head, but what about the code itself? Sure, he knows all about his own program and is in a position to debug it and make changes, but what happens to debugging in a team environment? While our smart programmer is on leave in Hawaii, rewarded for his hard work, how miserable life will be for his coworkers!
Oh, and did I mention the program had 352 goto statements?
Before structured programming took over, the goto statement was a very powerful tool in the hands of a very powerful programmer. Those days, however, are no more; neither the programmer of that era nor his favourite tool carries the same weight today. Still, it would be wrong to say that the goto statement has no application in high-level programming. While some high-level languages, such as Java, do not support the statement at all, others such as C#.NET continue to allow it, and there it proves useful for causing fall-through in switch-case blocks (which C# otherwise forbids).
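As a quick illustration (a minimal sketch of my own, not taken from any particular program), this is how goto can stand in for fall-through in a C# switch block:

using System;

class GotoFallThroughDemo
{
    static void Describe(int n)
    {
        switch (n)
        {
            case 1:
                Console.WriteLine("one...");
                goto case 2;      // explicit "fall-through": control jumps into case 2
            case 2:
                Console.WriteLine("one or two");
                break;
            default:
                Console.WriteLine("something else");
                break;
        }
    }

    static void Main()
    {
        Describe(1);              // prints "one..." followed by "one or two"
    }
}

Without the goto, the compiler would reject the first case, because a non-empty case section in C# must end with a break, return, throw or goto.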
The popular saying among many so-called high-level programmers is, "Never use GOTO statements", but this need not be taken as absolute truth. That said, several great computer scientists, whose work forms the base of much of computer science today, have sternly voiced their concerns against the goto statement. The most famous case is Edsger W. Dijkstra's 1968 letter to the Communications of the Association for Computing Machinery, written as "A Case Against the GO TO Statement" and published under the title "Go To Statement Considered Harmful".
For a number of years I have been familiar with the observation that the quality of programmers is a decreasing function of the density of go to statements in the programs they produce. More recently I discovered why the use of the go to statement has such disastrous effects, and I became convinced that the go to statement should be abolished from all "higher level" programming languages (i.e. everything except, perhaps, plain machine code).
In reality, goto statements belong in assembly language, where the program has no option but to jump (conditionally or unconditionally) to memory locations specified explicitly in the program itself. In the early days of programming, before the while and for loops even came into existence, the goto statement was, as I already mentioned, a very powerful tool. Now, with better options at hand that were not available back then, we must do whatever we can to make our programs easier to manage, read and debug - we must strive to make our programs more high-level.
The go to statement as it stands is just too primitive; it is too much an invitation to make a mess of one's program.
The use of the word mess is, in my opinion, quite apt. Imagine a program with 352 goto statements. You're probably going to spend more than half of your office hours scrolling up and down, trying to locate the labels and make sense of what the program actually does. (And I'd sincerely hope you're not using the vi text editor.) Owing to this seemingly random flow of control between statements, programs that use too many goto statements earn the nickname "spaghetti code", which falls very short of a compliment.
So why would you use a goto statement at all? It is a proven result (the structured program theorem) that every goto can be replaced with a suitable combination of sequencing, selection and looping constructs (except, arguably, for the switch fall-through convenience I mentioned earlier). The reason goto still gets used is probably that many programmers think (or like to think) the way a computer does: executing a set of statements and jumping to a new set when a branching instruction is encountered. In a way, dividing your program into methods/functions achieves the same thing, but with code that is far easier to manage.
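To make that concrete, here is a small sketch of my own (not from any real codebase) showing the same count-down written first with a label and a goto, and then with the while loop that replaces it:

using System;

class GotoVersusLoop
{
    // The "jumpy" version: a label and a backwards goto.
    static void CountDownWithGoto(int n)
    {
    top:
        if (n > 0)
        {
            Console.WriteLine(n);
            n--;
            goto top;             // jump back to the label
        }
    }

    // The structured version: same behaviour, no label to hunt for.
    static void CountDownWithLoop(int n)
    {
        while (n > 0)
        {
            Console.WriteLine(n);
            n--;
        }
    }

    static void Main()
    {
        CountDownWithGoto(3);     // prints 3, 2, 1
        CountDownWithLoop(3);     // prints 3, 2, 1
    }
}

In a three-line example the difference looks trivial; spread 352 of those labels across a real program and it no longer is.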
So, to conclude, to not go to is a much more sensible decision.
Is the coffee cup getting colder? (Part 2)
So, in this series of tests, .NET seems to be the clear winner so far. If this makes you really happy, you may be in for a disappointment today. I now unveil the results of two more tests that we performed recently.
TEST 3
DRAWING A HELL LOT OF STUFF
--------------------------------------------
In this test, we had both our competitors draw, as the title says, a hell lot of stuff onto a single window. Apparently one of them had a lot of fun doing it, while the other got extremely stressed out. The task was to draw 1000 rectangles and 1000 circles. In case you're wondering whether life would've been simpler had we drawn 2 rectangles and 2 circles, it wouldn't: the time either platform needs to draw a single shape is well under 1 millisecond. To get timings large enough to compare meaningfully, we've overloaded both with a heavily time-consuming task.
Let us, as usual, have a quick glance at the code before moving over to the results.
Java program (extract)
(not available at the moment)
C#.NET program (extract)
static void F1_Paint(object sender, PaintEventArgs e)
{
    Graphics g = e.Graphics;
    Rectangle R = new Rectangle(10, 10, 100, 100);
    g.DrawRectangle(Pens.Black, R);
    g.DrawEllipse(Pens.Black, R);
    g.DrawRectangle(Pens.Black, R);
    g.DrawEllipse(Pens.Black, R);
    // ... the two calls above are repeated until 1000 rectangles and 1000 circles have been drawn
}
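The extract above doesn't show how the timings were captured. Purely as an assumption on my part (not necessarily how the original tests measured it), one straightforward way in C# is the Stopwatch class:

using System;
using System.Diagnostics;
using System.Drawing;
using System.Windows.Forms;

class PaintTimingSketch : Form
{
    protected override void OnPaint(PaintEventArgs e)
    {
        Stopwatch watch = Stopwatch.StartNew();   // start timing just before the drawing work

        Rectangle r = new Rectangle(10, 10, 100, 100);
        for (int i = 0; i < 1000; i++)
        {
            e.Graphics.DrawRectangle(Pens.Black, r);
            e.Graphics.DrawEllipse(Pens.Black, r);
        }

        watch.Stop();
        Console.WriteLine("Paint took " + watch.ElapsedMilliseconds + " ms");
    }

    static void Main()
    {
        Application.Run(new PaintTimingSketch());
    }
}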
The rather stunning results are as follows... (lower is better)
Java SE 6 : 63 ms
.NET 3.5 : 227 ms
So as you can see, the time needed by .NET 3.5 to draw the same set of figures (2000 in total) is a staggering 3.6 times that required by the JVM! Java 2D can render through a hardware-accelerated OpenGL pipeline (in Java SE 6 it is enabled with the -Dsun.java2d.opengl=true system property), while .NET's Windows Forms drawing goes through GDI+. This significant difference can plausibly be credited to the difference between those two pipelines. Most people prefer OpenGL to GDI+, primarily because it is known to be faster (especially in game development) and because actions such as rotating and scaling drawings are simpler. However, GDI+ has its advantages too; the most notable being that it is guaranteed to run on any default Windows installation, whereas OpenGL depends on reasonably up-to-date graphics drivers being installed.
TEST 4
ADDING LOTS OF CONTROLS TO A WINDOW
-------------------------------------------------------
In this test, we tried adding 1000 controls to a window programmatically; that is, we're not using Visual Studio or NetBeans to "drag-and-drop" the controls; we're adding them at runtime. Considering that the .NET framework was created primarily for Windows, the results of this test may be disturbing to some.
The programs in both languages are very similar, in that they both just add controls to the window / applet in a loop. (I'll make the programs available later anyway.)
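In the meantime, here's a rough idea of what the .NET side of the test looks like - a sketch of my own, not the actual test program, with the control type and placement picked purely for illustration:

using System;
using System.Windows.Forms;

class ControlsTestSketch : Form
{
    public ControlsTestSketch()
    {
        // Add 1000 buttons to the window at runtime, one per loop iteration.
        for (int i = 0; i < 1000; i++)
        {
            Button b = new Button();
            b.Text = "Button " + i;
            b.Left = (i % 40) * 20;       // crude placement so they don't all overlap
            b.Top = (i / 40) * 20;
            Controls.Add(b);
        }
    }

    static void Main()
    {
        Application.Run(new ControlsTestSketch());
    }
}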
So we're heading off to the results directly...
In this test, Java is marginally ahead of .NET.
So apparently, as far as graphics and visual stuff are concerned, Java is a few steps ahead of .NET. That isn't enough to say "graphics are better in Java than in .NET", because at the moment Microsoft endeavours such as WPF are far ahead of Java when it comes to rich and intuitive user interfaces.
So those were the two test results for today; more tests are lined up, so don't lose hope! For now, it's a tie between .NET and Java, and the coffee cup may not be getting cold after all.
for(double x = 0; x <= 100000 ; x++){System.out.println(x);}
for(double x = 0; x <= 100000 ; x++){Console.WriteLine(x);}
int i, j, temp, a[] = new int[1000];
long start, end, time;
for (i = 999; i >= 0; i--) a[i] = 1000 - i;
for (i = 0; i < 1000; i++)
    for (j = 0; j < 999 - i; j++)
        if (a[j] > a[j+1]) { temp = a[j]; a[j] = a[j+1]; a[j+1] = temp; }

int i, j, temp;
int[] a = new int[1000];
long start, end, time;
for (i = 999; i >= 0; i--) a[i] = 1000 - i;
for (i = 0; i < 1000; i++)
    for (j = 0; j < 999 - i; j++)
        if (a[j] > a[j+1]) { temp = a[j]; a[j] = a[j+1]; a[j+1] = temp; }
Well, it could probably not get any more auspicious than to start with the highly acclaimed "Hello World" statement.
As the "About Me" box rightly states, my name is Pritin Tyagaraj, and I'm doing my undergraduate course in Computer Science at SRM University, India (which, to my utter surprise, is India's best private college, according to the Times Of India ; but that's an entirely different issue).
I have always been fond of computers and programming, but have only recently taken to them seriously. At first I tried my hand at Java. Something seemed wrong. Not that Java is in any way less equipped or worse than any .NET language; it's just that I wasn't having fun with it. At times I've blamed its recommended variable naming conventions, and at others its IDE; neither of which makes any sense at all.
Desperately looking for alternatives, I came wandering over to Microsoft's .NET. I had found my home! (Or is it my office-to-be?) Something seemed right about .NET. I've always enjoyed programming for fun with .NET; it was boring in C, and something I didn't have time to try in Java, but .NET had its way with me.
I don't really prefer studying directly from a book; I'd rather play around with a new language by myself before even getting started on learning it formally. At first I played a lot with VB.NET, because I had heard it was easy. But then, misled by the popular (and unfairly biased) opinions against VB as a choice of language, I migrated to C#.
It has not even been a year since I first got to .NET, and I haven't really had any formal training, so I obviously don't know much. I intend to use this blog to share with you what I learn, as I learn it.
So sit tight and get ready to Live and Love .NET!