Saturday 24 November 2007

Memories are cheap

With the current development in computer technology the industry will soon be able to produce huge memories (hundreds of terabytes) of non-volatile memory (memory that is not erased when the power is switched off). Discuss how this will affect the architecture of computer systems, any changes in the data processing and storage techniques, and ultimately the user.

The ability of the computer industry to produce huge volumes of inexpensive non-volatile memory is affecting the architecture of computer systems by taking the responsibility for data storage away from the client PC or device. Companies across the world are implementing Storage Area Network (SAN) technology, which allows remote storage devices to be seen by servers as being locally attached. Employee portals such as Microsoft Office SharePoint Server 2007 allow employees to work on documents and save them online to the SAN; the need to provide staff with large amounts of local storage space is reduced. The existence of a SAN improves the disaster recovery process, and legislation such as Sarbanes-Oxley has actually forced companies to implement SAN technology to meet legal requirements. Looking beyond commercial enterprises to the global use of computers, "the ready availability of high capacity, low cost storage systems has fueled the application of both SAN and NAS (network-attached storage) architectures and made the Internet a high growth area with almost universal acceptance" (Grochowski & Halem, 2003).

Put simply, the concept of the desktop application is becoming increasingly redundant. People are increasingly using applications delivered through a web browser and allowing the providers of those applications to store all related data. Social applications such as Facebook, Flickr and Second Life make no demands on the user to save anything locally other than login credentials. Google Mail offers anyone more than a gigabyte of free email storage and allows us to access our mail from anywhere in the world that we can get online. The video games industry will soon stop making its users buy software from a retailer, relying instead on delivery via the internet; Valve Corporation already uses its Steam service to let users download games, play them online and provide feedback. Popular client software such as Microsoft Office is being challenged by Google Docs, which provides word processor and spreadsheet functionality online. There is no more need to make a Windows version of an application, a Linux version and a Mac version; just make a web version. Maybe even leviathan operating systems such as Windows Vista and Linux will become redundant if all the user needs is a simple web browser to access and manipulate their data. All of these advances require vast amounts of storage capacity, which is why, for example, "Microsoft is building a mammoth data center on a former bean field in the farming town of Quincy, Washington" (Carr, 2007).

References:

Grochowski, E. & Halem, R. D. (2003) "Technological impact of magnetic hard disk drives on storage systems" IBM Systems Journal, 42 (2), pp. 338-346, International Business Machines Corporation [Online]
Available from http://www.research.ibm.com/journal/sj/422/grochowski.pdf (Accessed 24th November 2007)

Carr, N. (2007) "Software companies are building their way to a very material future" [Online] London: Guardian News and Media Ltd.
Available from http://www.guardian.co.uk/technology/2007/jun/28/comment.guardianweeklytechnologysection
(Accessed 24th November 2007)

RISC vs CISC

Compare and contrast CISC architecture and RISC architecture. Make sure to include the strengths and weaknesses of each as well as applications to which they would be most suited. You may also compare/contrast them with any architectures that may be considered as future replacements for either or both of these two.

The aim of CISC architecture is to do more with fewer instructions, so the instructions passed to a CPU within CISC are much more complicated. Because of this CPU-oriented approach, the software compiler that essentially feeds the instructions has to do less work. Fewer, more complicated instructions need a smaller amount of storage space, and therefore the computer needs less RAM. CISC architecture was therefore ideal at a time when RAM was far more expensive than it is today, and when the compilers working with the architecture were far less capable.

RISC architecture takes the opposite approach to CISC; it is designed to receive many more, simpler instructions. RISC allows a number of performance-enhancing techniques to be applied. One such is pipelining - "the technique of allowing the steps in the machine cycle to overlap" (Brookshear, 2007: 124). Another is a form of caching: because less CPU 'real estate' is required to decode the much simpler instruction set, more general-purpose registers can be made available, which the CPU uses to hold copies of values from main memory and so avoid much of the time normally spent communicating with it.
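To make the benefit of pipelining concrete, here is a minimal Python sketch (a toy model of my own, not any real processor) that compares cycle counts with and without overlapping stages, assuming a three-step machine cycle, one clock cycle per stage and no stalls.

# Toy model only: three pipeline stages, one cycle each, no hazards or stalls.
STAGES = ["fetch", "decode", "execute"]

def cycles_without_pipelining(num_instructions):
    # Each instruction must complete every stage before the next one starts.
    return num_instructions * len(STAGES)

def cycles_with_pipelining(num_instructions):
    # Stages overlap: once the pipeline is full, one instruction
    # completes on every clock cycle.
    return len(STAGES) + (num_instructions - 1)

if __name__ == "__main__":
    n = 100
    print(n, "instructions, no pipeline:", cycles_without_pipelining(n), "cycles")
    print(n, "instructions, pipelined:  ", cycles_with_pipelining(n), "cycles")

With 100 instructions the pipelined machine needs 102 cycles rather than 300, which is why a simple, uniform instruction set that keeps the pipeline flowing is such an advantage.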

Both architectures have drawbacks. CISC allows for much less flexibility in software because the more complicated instruction set that it processes is more rigid in terms of allocating memory and processing programs. CISC is also much harder to pipeline because its instructions vary in length. While RISC is now more dominant in the processor market (and is the basis behind future processor technology, as I will explain below) it is only as good as the software that supports it. Compilers must do much more to simplify the instructions to a level that RISC can process, and for this reason RISC struggled to gain a foothold in the 1980s and 1990s. Software companies such as Microsoft didn't back it - "Windows 3.1 and Windows 95 were designed with CISC processors in mind" (Chen, Novick & Shimano, 2000).
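As a rough illustration of that extra compiler work, here is a small Python toy (assumptions and names are mine, loosely based on the classic "multiply two numbers held in memory" example) showing how one complex CISC-style instruction corresponds to a sequence of simple RISC-style load, operate and store instructions.

# Toy machine state: a few memory locations and a register file.
memory = {"a": 6, "b": 7}
registers = {}

def cisc_mult(dest, src):
    # CISC style: a single instruction reads both operands from memory,
    # multiplies them and writes the result back to memory.
    memory[dest] = memory[dest] * memory[src]

def load(reg, addr):
    # RISC style: simple instructions that only move data or work on registers.
    registers[reg] = memory[addr]

def mul(dst, r1, r2):
    registers[dst] = registers[r1] * registers[r2]

def store(addr, reg):
    memory[addr] = registers[reg]

cisc_mult("a", "b")     # one complex instruction does all the work

memory["a"] = 6         # reset, then the compiler-expanded RISC version
load("r1", "a")
load("r2", "b")
mul("r1", "r1", "r2")
store("a", "r1")
print(memory["a"])      # 42 either way

The result is identical, but the RISC version asks the compiler to emit four instructions where the CISC version needed one.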

In the last five years the regular increase in performance brought about by each new release of a CPU has slowed, and manufacturers such as Intel and AMD are designing future processors to take advantage of multithreading rather than trying to squeeze in more instructions per clock cycle. Probably the best known of these next generation CPUs is Intel's Itanium, which is built on the Explicitly Parallel Instruction Computing (EPIC) architecture. This improves on RISC by making the compiler and processor work together to measure how many of the operations in a program can be performed simultaneously, and again much more responsibility for this is passed to the compiler. More space is again allocated to cache memory and also to adding extra cores to the processor. These extra cores allow multiple threads to be executed simultaneously - a huge advantage in the field of virtualisation, for example, because one operating system can be run on each core and the physical size of server farms can be greatly reduced. "Itanium's advantages in instruction level parallelism (ILP) and relatively small cores will give it a clear performance lead over its RISC and CISC rivals as semiconductor technology advances" (Feldman, 2006).
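To show what taking advantage of multiple cores means in practice, here is a minimal Python sketch (my own example, nothing to do with Itanium or EPIC specifically) that splits a CPU-bound job across however many cores the machine reports.

# Count primes below a limit by giving each core its own slice of the range.
from multiprocessing import Pool, cpu_count

def count_primes(bounds):
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    limit = 200000
    cores = cpu_count()
    step = limit // cores
    # One (lo, hi) chunk per core; the last chunk absorbs any remainder.
    chunks = [(i * step, limit if i == cores - 1 else (i + 1) * step)
              for i in range(cores)]
    with Pool(cores) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total, "primes below", limit, "counted on", cores, "cores")

The point is simply that the work has to be divided up explicitly; the extra cores do nothing for a program written as a single sequential thread.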

It is clear, however, that today's design techniques are limited; "developers of artificial neural networks argue that the basic CPU-main memory design model is inefficient when compared to the human brain because most of the connectivity is destined to be idle most of the time" (Brookshear, 2007: 125).

References:

Brookshear, J. G. (2007) Computer Science: An Overview 9th ed. Boston: Pearson Education Inc.

Chen, C., Novick, G. & Shimano, K. (2000) "RISC vs. CISC" [Online]
Available from http://cse.stanford.edu/class/sophomore-college/projects-00/risc/about/index.html (Accessed 22nd November 2007)

Feldman, M. (2006) "Itanium's Growing Pains" [Online] Santa Fe, USA: Tabor Publications and Events
Available from http://www.hpcwire.com/hpc/640152.html (Accessed 22nd November 2007)

Sunday 18 November 2007

Predicting the future

Attempt, in 350-500 words, to predict how a person holding the same position as you are in now would describe his/her position in ten years' time. Will the position still exist? What will be similar to today, and what will be different? What training will the position require? What elements will be automated? If your job description is complex, simply choose one aspect of it. Whatever you predict, I expect you to substantiate your claim as well as you presently can. You are allowed to “peek ahead” in our textbook, quote outside references, or whatever else you think will convince us.

I fully expect the position that I hold to exist in ten years' time. I foresee that the manual process of software development will become more automated but that the wider responsibilities of the position - supporting existing database applications, providing expertise to implement new projects and helping to facilitate business change - will still be the same a decade from today. Part of my job involves developing software solutions that handle data in different formats. Whenever I need to access new data or transform data from one format to another I have to write a new query, which can be time-consuming. I expect that advances in programming languages such as the Microsoft LINQ Project will revolutionise this. "In short, LINQ would meld queries of multiple data stores into a common development environment, transforming the way queries are programmed into code" (Schwartz & Desmond, 2007). Instead of having to use a separate set of editing tools or a separate syntax to deal with different data sources I expect to be able to work with all data through one interface and one common set of commands, and to save myself a lot of time in the process.
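As a rough sketch of the idea (written in Python rather than the .NET languages LINQ actually targets, and using made-up data), the goal is that once every source is exposed as a plain sequence of records, the same query expression works against all of them.

# One query, two very different data sources.
import sqlite3

def adults_by_surname(people):
    # A single "query" usable against any source of (name, age) records.
    return sorted(name for name, age in people if age >= 18)

# Source 1: an in-memory collection.
in_memory = [("Smith", 34), ("Jones", 12), ("Brown", 56)]

# Source 2: rows coming back from a relational database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE people (name TEXT, age INTEGER)")
db.executemany("INSERT INTO people VALUES (?, ?)", in_memory)
from_database = db.execute("SELECT name, age FROM people")

print(adults_by_surname(in_memory))      # ['Brown', 'Smith']
print(adults_by_surname(from_database))  # ['Brown', 'Smith']

The sketch stands in for the uniformity LINQ promises: one syntax, many data stores.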

My organisation has begun to implement the hardware and software infrastructure needed to communicate with ContactPoint, a national UK database that will store information on children and their interaction with the authorities. As recently as twelve months ago few people in my organisation had heard of ContactPoint; now it is having to cope with issues such as data cleansing and transferring data to the national index via an API. The increasing desire of central government to make data available at a national level is one of the reasons why I don't see my position - which is to build and support the infrastructure - changing too much. I envisage, for example, that the government will want to build an index to store information about people claiming income support, and another raft of changes to make this happen will need to be planned, implemented and supported. Brookshear (2007: 491) states “without a doubt, advances being made in artificial intelligence have the potential of benefiting humankind” and other revolutions in the field of computer science such as quantum computing and robotics have the potential to change the world. But I don't think that these will trickle down to local government IT in the next ten years; I feel that my position is pretty safe from drastic change.

References:

Schwartz, J. & Desmond, M. (2007) "Looking to LINQ: Will Microsoft's Language Integrated Query transform programmatic data access?" [Online] CA, USA: 1105 Media Inc.
Available from http://reddevnews.com/features/article.aspx?editorialsid=707 (Accessed 16th November 2007)

Brookshear, J. G. (2007) Computer Science: An Overview 9th ed. Boston: Pearson Education Inc.

Saturday 17 November 2007

Hardware and Software: the chicken and the egg

In the early days hardware was always developed first, followed by adaptations in the software. Do you believe this is still the case today? Use examples to support your conclusion.

I think that adaptations in software have always been reactive to what is available from the hardware industry. Software has either been unable to reach its potential because of the limitations of hardware, or has been overwhelmed by developments. When computers first became available in the home and workplace, the available processing power was so small that it prevented software from achieving its goals. A study in Ireland in 1969 showed "that main factors holding back developments among in-house commercial users were the lack of computing capacity, the unsuitability of existing hardware" (Drew & Foster, 1994). The study showed that software packages such as payroll and economic modelling were in demand but the computer hardware of the day failed to provide the resources to realise them.

The field of quantum computing is another example of where the hardware had to come before adaptations in the software. In 1994 Peter Shor described his quantum factoring algorithm, but it wasn't until 2001 that IBM demonstrated the algorithm running on a quantum computer. The algorithm couldn't be tested and modified until the necessary hardware was available.

The video games industry has always worked on a 'hardware first, software second' basis. The revolutionary Nintendo Wii is an example of this. It was around May 2004 that Nintendo first published news of its new hardware project, and not until the end of 2005 that third-party software developers began to announce what titles they had planned for the new system (NintendoRevolution, 2007). And now the rapid increase in available computing power - from single to dual to quad core processors - challenges the software industry to come to terms with these increases. "Some have suggested that the challenges of parallelism bestowed onto the software industry will have programmers looking into the abyss" (Tulloch, 2007). This is an example of where the roadmap for the development of hardware looks very clear - to keep increasing power - but the roadmap for software to harness that power does not.

I think that the development of hardware follows a very scientific and research-based path, whereas the development of software is much more down to the ideas of people and their ability to identify where a type of software can make a profit or improve people's lives. The application of advances such as bioinformatics or neural linguistic programming may have great potential, but they will always be reliant on the amount of processing power that the hardware industry makes available.

References:

Drew, E. and Foster, F. G. (1994) "Information Technology in Selected Countries" [Online] The United Nations University
Available from http://www.unu.edu/unupress/unupbooks/uu19ie/uu19ie00.htm#Contents (Accessed 17th November 2007)

NintendoRevolution (2007) Nintendo Wii Timeline [Online] NintendoRevolution.ca
Available from http://www.nintendorevolution.ca/07312006/10/nintendo_wii_timeline (Accessed 17th November 2007)

Tulloch, P. (2007) "Discussing the many core future" [Online] San Diego, USA: Tabor Communications and Events
Available from http://www.hpcwire.com/hpc/1332461.html (Accessed 17th November 2007)