Tuesday, January 27, 2009

Understanding the Windows Experience Index

WEI scores currently range from 1 to 5.9. Your computer is rated with an overall score, called the base score, and with subscores for each of five individual hardware components: processor, memory, graphics, gaming graphics, and primary hard disk. The base score is determined from the lowest of the five subscores, because your computer's performance is limited by its slowest or least-powerful hardware component.

The base score and subscores express the level of performance you can expect not only from Windows Vista itself, but from the programs that you run on it. That said, a base score of 1.0 doesn't mean that you have a bad computer or that you shouldn't use Windows Vista. It means that Windows Vista will run with basic functionality and that common productivity programs, such as those in the Microsoft Office system, will perform acceptably. A higher score represents a computer that's capable of higher performance and of running programs that demand more system resources.

As newer, faster hardware becomes available, Microsoft will increase the top end of the rating scale to allow scores of 6.0 and higher. That means the score you see today will have the same meaning at any point in your computer's lifetime. For example, even if the top end of the WEI range increases to 8.0, my computer's base score will remain at 2.2 if I don't make any hardware changes.
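
To make the relationship concrete, here is a minimal sketch of the base-score rule described above; the component names come from the article, but the function itself is hypothetical:

    # Minimal sketch: the base score is the lowest of the five subscores.
    def wei_base_score(subscores):
        return min(subscores.values())

    subscores = {
        "processor": 4.1,
        "memory": 4.5,
        "graphics": 2.2,          # weakest component
        "gaming_graphics": 3.0,
        "primary_hard_disk": 4.9,
    }
    print(wei_base_score(subscores))  # 2.2, matching the slowest component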

Windows Experience Index

Understand and improve your computer's performance in Windows Vista

By Stephanie Krieger

While I was presenting at a user group meeting last week, someone asked me whether a low Windows Experience Index (WEI) score of 1 or 2 means that he should not use Windows Vista on that computer. The answer is "not at all."

One of my favorite things about Windows Vista is that it scales itself to your computer's capabilities to give you the best possible performance. For example, if your computer doesn't have the graphics capability to effectively display the new Windows Aero visual effects, Windows Vista won't enable those effects on your computer.

In this article, I'll show you where to find your computer's WEI base score and subscores, as well as how to interpret them. I'll also show you how you can use these scores to help know what to look for when buying a new computer, upgrading your existing computer, or troubleshooting performance issues.

What is Wikimedia Commons?

Wikimedia Commons is a media file repository making available public domain and freely-licensed educational media content (images, sound and video clips) to all. It acts as a common repository for the various projects of the Wikimedia Foundation, but you do not need to belong to one of those projects to use media hosted here. The repository is created and maintained not by paid-for artists but by volunteers. The scope of Commons is set out on the project scope pages.

Wikimedia Commons uses the same wiki-technology as Wikipedia and everyone can edit it. Unlike media files uploaded to other projects, files uploaded to Wikimedia Commons can be embedded on pages of all Wikimedia projects without the need to separately upload them there.

Launched on 7 September 2004, Wikimedia Commons hit the 1,000,000 uploaded media file milestone on 30 November 2006 and currently contains 3,839,917 files and 86,946 media collections. More background information about the Wikimedia Commons project itself can be found in the General disclaimer, at the Wikipedia page about Wikimedia Commons and its page in Meta-wiki.

Unlike traditional media repositories, Wikimedia Commons is free. Everyone is allowed to copy, use and modify any files here freely as long as the source and the authors are credited and as long as users release their copies/improvements under the same freedom to others. The Wikimedia Commons database itself and the texts in it are licensed under the GNU Free Documentation License. The license conditions of each individual media file can be found on their description pages. More information on re-use can be found at Commons:Reusing content outside Wikimedia and Commons:First steps/Reuse.

Save on Summer Cooling Costs with a Programmable Thermostat

EPA is launching an effort to help Americans save on their summer cooling bills with advice on how to properly program their thermostat. When used correctly, ENERGY STAR qualified programmable thermostats can save money on energy bills and help fight global warming by reducing greenhouse gas emissions. If consumers manage their heating and cooling schedules accordingly, a programmable thermostat can save about $180 a year on home energy bills.

Monitors

ENERGY STAR qualified computer monitors use from 25–60% less electricity than standard models, depending on how they are used.

Earning the ENERGY STAR

  • Computer monitors must meet stringent requirements in On, Sleep, and Off Modes in order to earn the ENERGY STAR (a short sketch after this list summarizes the limits).
    • In On Mode, the maximum allowed power varies based on the computer monitor's resolution.
    • In Sleep Mode, computer monitor models must consume 2 watts or less.
    • In Off Mode, computer monitor models must consume 1 watt or less.
  • Enabling your monitor's power management features and turning it off at night not only saves energy, but also helps the monitor run cooler and last longer.
  • Businesses that use ENERGY STAR enabled office equipment may realize additional savings on air conditioning and maintenance.
  • Equipment in a single home office (e.g., computer, monitor, printer, and fax) that meets the new ENERGY STAR specifications will save more than $115 over the life of the products, and even more if you don't already have ENERGY STAR qualified equipment.
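
Here is the promised sketch of the three mode limits. The On Mode ceiling depends on the monitor's resolution, so it is passed in as a parameter rather than derived from the actual specification tables, and all names here are hypothetical:

    # Hypothetical check against the ENERGY STAR monitor limits listed above.
    def meets_monitor_limits(on_watts, sleep_watts, off_watts, on_limit_watts):
        return (on_watts <= on_limit_watts   # On Mode: ceiling varies by resolution
                and sleep_watts <= 2.0       # Sleep Mode: 2 watts or less
                and off_watts <= 1.0)        # Off Mode: 1 watt or less

    # Example: a monitor whose resolution gives it a 30 W On Mode ceiling.
    print(meets_monitor_limits(28.0, 1.5, 0.8, on_limit_watts=30.0))  # True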

Federal IT managers and procurement staff should visit Product Purchasing and Computer Power Management for Federal Agencies to learn about saving energy by purchasing ENERGY STAR and EPEAT-registered office equipment and complying with Executive Order 13423.

Sample Language: Explain relationship with ENERGY STAR

  • RETAILER/MANUFACTURER PARTNERS: [Company name] is proud to offer our customers products with the ENERGY STAR label.

  • UTILITY/STATE PARTNERS: [Organization name] is a proud partner of ENERGY STAR. OR [organization name] proudly promotes ENERGY STAR.

  • COMMERCIAL AND INDUSTRIAL PARTNERS: [Company name] is committed to continually improving our management of energy resources, which reduces both operating costs and related forms of pollution. We are proud to be part of the family of businesses who have also joined with ENERGY STAR.

  • SERVICE AND PRODUCT PROVIDERS (SPPs): [Company name] believes businesses benefit financially by continually improving their management of energy resources, and the environment benefits from reduced levels of related pollution. We are proud to offer services and products that may assist businesses who have committed to the goals of ENERGY STAR.

  • HOME BUILDER PARTNERS: [Builder name] is a proud builder of ENERGY STAR labeled homes.

  • FOR RATERS, LENDERS: [Organization name] is a proud partner of ENERGY STAR. OR [organization name] proudly promotes ENERGY STAR.

  • FOR INSULATION MANUFACTURER PARTNERS: [Organization name] is proud to offer insulation products that help meet the performance goals of the "Seal and Insulate with ENERGY STAR" effort.

  • HOME PERFORMANCE PARTNERS: [Organization name] is proud to partner with ENERGY STAR and sponsor Home Performance with ENERGY STAR in [geographic region].

ENERGY STAR Web Linking Policy

ENERGY STAR will provide a hyperlink to a partner's Web site only if the following guidelines are met. Partners who choose not to comply with the guidelines will remain listed on the ENERGY STAR Web site, but without a hyperlink.

Web Linking Policy: Requirements

  1. The ENERGY STAR name and logo (both used in compliance with the ENERGY STAR Identity Guidelines) along with a brief introduction of ENERGY STAR (see sample language below), are posted on the partner's Web site. The page with this information will be used for the hyperlink from the ENERGY STAR Web site.

  2. Text is provided that explains to consumers the relationship of the partner to ENERGY STAR (see sample language below).

  3. Information is provided that identifies any service or ENERGY STAR labeled product(s) the partner provides. If a partner manufactures or sells products with the ENERGY STAR label, the partner should clarify which model numbers are ENERGY STAR qualified to avoid in-store confusion. And, if the partner promotes or sells qualified products and does not manufacture them, then the partner should include the manufacturer of the product in addition to the model number.

  4. Provide a hyperlink to www.energystar.gov, or mention that additional information is available at the ENERGY STAR Web site and list the web address: www.energystar.gov.

  5. After meeting the above requirements, please e-mail hotline@energystar.gov requesting a review of your Web page for compliance with the Web Linking Policy. Be sure to provide the URL for your Web page that meets the guidelines.

Office Equipment

If every home office product purchased in the U.S. this year were ENERGY STAR qualified, Americans would save $200 million in annual energy costs while preventing almost 3 billion pounds of greenhouse gases – equivalent to the emissions of 250,000 cars.

Office equipment that has earned the ENERGY STAR helps save energy through special energy-efficient designs, which allow it to use less energy to perform regular tasks and to enter a low-power mode automatically when not in use.

Most office equipment is left on for 24 hours a day, making energy-efficient design and power management features important for saving energy and reducing greenhouse gas emissions that contribute to global warming. In addition to reducing power use for the products themselves, ENERGY STAR qualified office products feature energy-efficient designs for accessories. So, products sold with an external power adapter, cordless handset, or digital front-end must have accessories that meet the ENERGY STAR specifications for External Power Supplies (EPS), Telephony, or Computers. These requirements ensure that the ENERGY STAR is represented only on the market's most energy-efficient products.

Earning the Government's ENERGY STAR

Since computers are in use more hours per day than they used to be, power management is important to saving energy. ENERGY STAR power management features place computers (CPU, hard drive, etc.) into a low-power “sleep mode” after a designated period of inactivity. Low-power modes for computers reduce the spinning of the hard disk, which decreases power consumption. Simply hitting a key on the keyboard or moving the mouse awakens the computer in a matter of seconds.

Price Watch - street price search engine

Offers a way to find prices on computer products (systems, CPU, memory, storage, networking, multimedia, etc.) from many manufacturers. Prices are entered by manufacturers using a proprietary Price Watch Info-Link system, and each listed product shows the date and time its price was posted.

Get a Deeper Technical View of Intel vPro Technology
With today's need for increased security and for establishing well-managed environments, the cost of managing PCs has become a significant percentage of the total cost of ownership (TCO) of technology. A critical capability that would help IT do more with the resources they have is the ability to protect and remotely manage both notebook and desktop PCs, regardless of wired or wireless state, or the state of the OS.
Intel vPro and Centrino Pro Processor Technology Quick Start Guide
Intel Active Management Technology provides various configuration options for customers to use when deploying Intel vPro and Intel Centrino Pro processor technology-enabled systems into their environment. Get a step-by-step approach to what needs to be done to successfully deploy Intel AMT systems.
The Pro Platform: Intel vPro Technology Podcast
The growing Pro platform is Intel's answer for business users who want to keep track of who is on the network, where they are, and the security risks they pose.
A computer is a programmable machine. The two principal characteristics of a computer are:
  • It responds to a specific set of instructions in a well-defined manner.
  • It can execute a prerecorded list of instructions (a program).

Modern computers are electronic and digital. The actual machinery -- wires, transistors, and circuits -- is called hardware; the instructions and data are called software.

    All general-purpose computers require the following hardware components:

  • memory : Enables a computer to store, at least temporarily, data and programs.
  • mass storage device : Allows a computer to permanently retain large amounts of data. Common mass storage devices include disk drives and tape drives.
  • input device : Usually a keyboard and mouse, the input device is the conduit through which data and instructions enter a computer.
  • output device : A display screen, printer, or other device that lets you see what the computer has accomplished.
  • central processing unit (CPU): The heart of the computer, this is the component that actually executes instructions.
Monday, January 26, 2009

    In the January issue of Computer

    The continued exponential growth of computational power and data-generation sources is giving rise to a new era in information processing: data-intensive computing. In this annual Outlook issue, we examine the challenges posed by this changing paradigm, featuring articles on grid-computing infrastructures; advances in biochemical computation; and sensor networks monitoring environments, systems, and complex interactions in a range of applications such as healthcare, fitness, and entertainment. We also look at the 24-hour knowledge factory for software development and revisit the professional and ethical dilemmas in software engineering.

computer service

Sometimes small size means small performance—but not with the Gateway UC Series Notebooks, which are a simple joy to use. Unlike some portables that cramp your productivity, Gateway's UC Series laptops come with lots of memory (up to 4GB of DDR2 RAM), giving you ample power for multitasking and multimedia. And our UC Series boasts large hard drives (up to 320GB), so you can keep your documents, photos, music and movies with you all the time, not back home on a desktop or external drive. Don't leave half your life behind!

    Sure, the Gateway UC Series is a head turner. When you're that beautiful on the outside, you get that kind of attention. But the UC's beauty is more than case deep. It starts with the brilliant Ultrabright™ high-definition display, providing a panoramic screen experience whose vivid media playback will make your videos and movies all the more enjoyable. Powering those great 1280 x 800 visuals is an Intel® graphics solution with full support for the gorgeous Aero™ interface of Windows Vista®. Intel's graphics media accelerator speeds performance yet also saves power, and you get more than 1GB of dynamic video memory. So your UC Series notebook won't turn just other people's heads; it'll turn yours, too.

    Computing

    Computing is usually defined as the activity of using and developing computer technology, computer hardware and software. It is the computer-specific part of information technology. Computer science (or computing science) is the study and the science of the theoretical foundations of information and computation and their implementation and application in computer systems.

    Computing Curricula 2005[1] defined computing:

    In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; processing, structuring, and managing various kinds of information; doing scientific studies using computers; making computer systems behave intelligently; creating and using communications and entertainment media; finding and gathering information relevant to any particular purpose, and so on. The list is virtually endless, and the possibilities are vast.

    Networking and the Internet

    Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.

    In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the Arpanet possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

    Multiprocessing

Some computers divide their work among two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.

    Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[19] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

    Memory

    A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.
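
A toy model makes the quoted instructions concrete; here a plain Python list stands in for the numbered cells (an illustration only, not how real memory hardware is organized):

    # Toy model: memory as a list of numbered cells.
    memory = [0] * 4096

    memory[1357] = 123                          # "put the number 123 into cell 1357"
    memory[2468] = 877
    memory[1595] = memory[1357] + memory[2468]  # "add cell 1357 to cell 2468,
                                                #  put the answer into cell 1595"
    print(memory[1595])                         # 1000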

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers, either from 0 to 255 or from -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
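
The byte-level encodings the paragraph describes are easy to inspect in Python; for example, storing a negative number across two consecutive bytes in two's complement:

    # Two's complement storage of -42 in two consecutive bytes.
    encoded = (-42).to_bytes(2, byteorder="little", signed=True)
    print(encoded.hex())                                             # 'd6ff'
    print(int.from_bytes(encoded, byteorder="little", signed=True))  # -42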

    The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM so its use is restricted to applications where high speeds are not required.[18]

    In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

    Arithmetic/logic unit (ALU)

    The ALU is capable of performing two classes of operations: arithmetic and logic.

The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers—albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").

    Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
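
Python's bitwise operators show these four operations directly; here they are applied to single bits, where 1 stands for true and 0 for false:

    a, b = 1, 0
    print(a & b)  # AND -> 0 (true only if both are true)
    print(a | b)  # OR  -> 1 (true if either is true)
    print(a ^ b)  # XOR -> 1 (true if exactly one is true)
    print(a ^ 1)  # NOT -> 0 (flips the bit)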

    Superscalar computers contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.

    Example

    A traffic light showing red.

    Suppose a computer is being employed to drive a traffic light. A simple stored program might say:

    1. Turn off all of the lights
    2. Turn on the red light
    3. Wait for sixty seconds
    4. Turn off the red light
    5. Turn on the green light
    6. Wait for sixty seconds
    7. Turn off the green light
    8. Turn on the yellow light
    9. Wait for two seconds
    10. Turn off the yellow light
    11. Jump to instruction number (2)

    With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again until told to stop running the program.

    However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:

    1. Turn off all of the lights
    2. Turn on the red light
    3. Wait for sixty seconds
    4. Turn off the red light
    5. Turn on the green light
    6. Wait for sixty seconds
    7. Turn off the green light
    8. Turn on the yellow light
    9. Wait for two seconds
    10. Turn off the yellow light
    11. If the maintenance switch is NOT turned on then jump to instruction number 2
    12. Turn on the red light
    13. Wait for one second
    14. Turn off the red light
    15. Wait for one second
    16. Jump to instruction number 11

In this manner, the computer is either running the instructions from number (2) to (11) over and over, or it's running the instructions from (11) down to (16) over and over, depending on the position of the switch.[15]
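
Translated into Python, the second program might look like the sketch below; the light and switch functions are stand-ins for real hardware I/O, and the comments map back to the numbered instructions:

    import time

    def set_light(color, on):            # stand-in for driving a real lamp
        print(("turn on " if on else "turn off ") + color)

    def maintenance_switch_on():         # stand-in for reading a real switch
        return False

    for color in ("red", "green", "yellow"):
        set_light(color, False)          # instruction (1): turn off all lights

    while True:
        set_light("red", True)           # instruction (2)
        time.sleep(60)                   # (3)
        set_light("red", False)          # (4)
        set_light("green", True)         # (5)
        time.sleep(60)                   # (6)
        set_light("green", False)        # (7)
        set_light("yellow", True)        # (8)
        time.sleep(2)                    # (9)
        set_light("yellow", False)       # (10)
        while maintenance_switch_on():   # instructions (11) through (16)
            set_light("red", True)
            time.sleep(1)
            set_light("red", False)
            time.sleep(1)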

    Programs

In practical terms, a computer program may run from just a few instructions to many millions of instructions, as in a program for a word processor or a web browser. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and rarely makes a mistake over many years of operation. Large computer programs comprising several million instructions may take teams of programmers years to write, so it is highly unlikely that the entire program has been written without error.

Errors in computer programs are called "bugs". Bugs may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases they may cause the program to "hang" - become unresponsive to input such as mouse clicks or keystrokes - or to completely fail or "crash". Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an "exploit" - code designed to take advantage of a bug and disrupt a program's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[11]

    In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from—each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
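
A toy machine makes the point tangible: below, the "program" is nothing but numbers sitting in the same memory as the data it manipulates. The opcode numbering is invented for illustration:

    # Toy stored-program machine: opcodes are just numbers in memory.
    LOAD, ADD, STORE, HALT = 1, 2, 3, 0

    memory = [
        LOAD, 8,    # copy the number at address 8 into the accumulator
        ADD, 9,     # add the number at address 9 to the accumulator
        STORE, 10,  # write the accumulator to address 10
        HALT, 0,
        5, 7, 0,    # data cells at addresses 8, 9, and 10
    ]

    pc, acc = 0, 0
    while memory[pc] != HALT:
        opcode, operand = memory[pc], memory[pc + 1]
        pc += 2
        if opcode == LOAD:
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc

    print(memory[10])  # 12 -- the program and its data share one memory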

    While it is possible to write computer programs as long lists of numbers (machine language) and this technique was used with many early computers,[12] it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember—a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[13]

    Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[14] Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

    The task of developing large software systems is an immense intellectual effort. Producing software with an acceptably high reliability on a predictable schedule and budget has proved historically to be a great challenge; the academic and professional discipline of software engineering concentrates specifically on this problem.

    Stored program architecture

    The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.

    In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

    Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

    Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time—with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:
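
In Python, for instance, the whole task takes only a handful of instructions:

    total = 0
    number = 1
    while number <= 1000:    # repeat until number has passed 1,000
        total += number      # add the current number to the running sum
        number += 1          # step to the next number
    print(total)             # 500500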

    Computer Science and Informatics Faculty lead new Pervasive Technology Centers:

Indiana University President Michael McRobbie announced on Tuesday, November 18th, the establishment of the new Pervasive Technology Institute, part of the IU Bloomington Incubator facility. The Institute, funded with $15 million from the Lilly Endowment, will consist of three centers: the Data to Insight Center, to be headed by Prof. Beth Plale; the Digital Science Center, to be headed by Prof. Geoffrey Fox; and the Center for Applied Cybersecurity Research, to be headed by Law School Prof. Fred Cate. Craig Stewart, associate dean for research technologies in the Office of the Vice President for IT, will serve as executive director for the Institute.

    Relationship with other fields

Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term Computing Science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM: turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[19] Three months later in the same journal, comptologist was suggested, followed next year by hypologist.[20] The term computics has also been suggested.[21] Informatik was a term used in Europe with more frequency.

    The renowned computer scientist Edsger Dijkstra stated, "Computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research has also often crossed into other disciplines, such as cognitive science, economics, mathematics, physics (see quantum computing), and linguistics.

    Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[4] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.

    The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[22]

    Computer science education

    Some universities teach computer science as a theoretical study of computation and algorithmic reasoning. These programs often feature the theory of computation, analysis of algorithms, formal methods, concurrency theory, databases, computer graphics and systems analysis, among others. They typically also teach computer programming, but treat it as a vessel for the support of other fields of computer science rather than a central focus of high-level study.

    Other colleges and universities, as well as secondary schools and vocational programs that teach computer science, emphasize the practice of advanced programming rather than the theory of algorithms and computation in their computer science curricula. Such curricula tend to focus on those skills that are important to workers entering the software industry. The practical aspects of computer programming are often referred to as software engineering. However, there is a lot of disagreement over the meaning of the term, and whether or not it is the same thing as programming.

    Monday, January 12, 2009

    The first digital computer

Short for Atanasoff-Berry Computer, the ABC was developed by Professor John Vincent Atanasoff and graduate student Cliff Berry beginning in 1937, with development continuing until 1942 at Iowa State College (now Iowa State University). On October 19, 1973, US Federal Judge Earl R. Larson signed his decision that the ENIAC patent by Eckert and Mauchly was invalid and named Atanasoff the inventor of the electronic digital computer.

    See our ABC dictionary definition for additional information about this computer.

The ENIAC was invented by J. Presper Eckert and John Mauchly at the University of Pennsylvania; construction began in 1943 and was not completed until 1946. It occupied about 1,800 square feet, used about 18,000 vacuum tubes, and weighed about 30 tons. Although the Judge ruled that the ABC computer was the first digital computer, many still consider the ENIAC to be the first digital computer.

    See our ENIAC dictionary definition for additional information about this computer.

Because of the Judge's ruling, and because the case was never appealed, we consider the ABC to be the first digital computer. However, because the ABC was never fully functional, we consider the first functional digital computer to be the ENIAC.

    When was the first computer invented?

    Question:

    When was the first computer invented?

    Answer:

Unfortunately, this question has no easy answer because of all the different classifications and types of computers. Therefore, this document lists each of the first computers, starting with the first programmable computer and leading up to the computers of today. Keep in mind that early inventions such as the abacus, calculators, tabulating machines, the difference engine, etc. are not accounted for in this document.

    How viruses may affect files

Viruses can affect any file, but usually attack .com, .exe, .sys, .bin, .pif, or data files - Viruses have the capability of infecting any file; however, they will generally infect executable files or data files, such as Word or Excel documents that are opened frequently, allowing the virus to try to infect other files more often.

Increase the file's size - When infecting files, viruses will generally increase the size of the file; however, with more sophisticated viruses these changes can be hidden.

    It can delete files as the file is run - Because most files are loaded into memory, once the program is in memory the virus can delete the file used to execute the virus.

    It can corrupt files randomly - Some destructive viruses are not designed to destroy random data but instead randomly delete or corrupt files.

    It can cause write protect errors when executing .exe files from a write protected disk - Viruses may need to write themselves to files that are executed; because of this, if a diskette is write protected, you may receive a write protection error.

It can convert .exe files to .com files - A virus may use a separate file to run the program and rename the original file to another extension, so the .com is run before the .exe.

    It can reboot the computer when executed - Numerous computer viruses have been designed to cause a computer to reboot, freeze, or perform other tasks not normally exhibited by the computer.

    Virus properties

    Below is a listing of some of the different properties a computer virus is capable of having and what the particular property is capable of doing. Keep in mind that not all viruses will have every one of these abilities.

Your computer can be infected even if files are just copied. Because some viruses are memory resident, as soon as a diskette or program is loaded into memory the virus attaches itself to memory and is then capable of infecting any file on the computer you have access to.

Can be polymorphic. Some viruses have the capability of modifying their code, which means one virus could have many similar variants. This is also true of e-mail viruses that change the subject or body of the message to help avoid detection.

    Can be memory or non-memory resident. As mentioned earlier a virus is capable of being either memory resident where the virus first loads into memory and then infects a computer or non-memory resident where the virus code is only executed each time a file is opened.

Can be a stealth virus. Stealth viruses first attach themselves to files on the computer and then attack the computer; this causes the virus to spread more rapidly.

Viruses can carry other viruses. Because viruses are only software programs, a virus may also carry other viruses, making it more lethal and helping the primary virus hide or assisting it with infecting a particular section of the computer.

Can make the system never show outward signs. Some viruses can hide the changes they make, such as when a file was last modified, making the virus more difficult to detect.

Can stay on the computer even if the computer is formatted. Some viruses have the capability of infecting different portions of the computer, such as the CMOS memory or the master boot record. Finally, if a computer is completely erased and the virus is on a backup disk, it can easily re-infect the computer.

    Virus ABCs

    One of the biggest fears among new computer users is being infected by a computer virus or programs designed to destroy their personal data. Viruses are malicious software programs that have been designed by other computer users to cause destruction and havoc on a computer and spread themselves to other computers where they can repeat the process.

    Once the virus is made, it is often distributed through shareware, pirated software, e-mail, P2P programs, or other programs where users share data.

    What is software?

Software is a collection of instructions that enables a user to interact with the computer or have the computer perform specific tasks. Without any software, the computer would be useless. For a computer to be functional, most computers include an operating system and a collection of different software programs.

    What is hardware?

    Hardware is best described as a device that is physically connected to your computer or something that can be physically touched. A CD-ROM, monitor, and printer are all examples of computer hardware.

    Earning the Government's ENERGY STAR

    Desktop and notebook (laptop) computers, game consoles, integrated computer systems, desktop-derived servers and workstations are all eligible to earn the ENERGY STAR. Those that come with the label are more efficient than ever. When purchasing a new computer, be sure to look for the ENERGY STAR before making your final decision. You should be able to find the label on the products and packaging as well as in product literature and on websites to make it easy for you to choose.

    EPA has strengthened the requirements for earning the ENERGY STAR to meet energy use guidelines in three distinct operating modes: standby, active, and sleep modes. This ensures energy savings when computers are being used and performing a range of tasks, as well as when they are in standby. ENERGY STAR qualified computers must also have a more efficient internal power supply.

    Internet History

    This Internet Timeline begins in 1962, before the word ‘Internet’ is invented. The world’s 10,000 computers are primitive, although they cost hundreds of thousands of dollars. They have only a few thousand words of magnetic core memory, and programming them is far from easy.

    Domestically, data communication over the phone lines is an AT&T monopoly. The ‘Picturephone’ of 1939, shown again at the New York World’s Fair in 1964, is still AT&T’s answer to the future of worldwide communications.

But the four-year-old Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense, a future-oriented funder of 'high-risk, high-gain' research, lays the groundwork for what becomes the ARPANET and, much later, the Internet.

    By 1992, when this timeline ends,

    • the Internet has one million hosts
    • the ARPANET has ceased to exist
    • computers are nine orders of magnitude faster
    • network bandwidth is twenty million times greater.

    Control unit

    The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer.[16] Control systems in advanced computers may change the order of some instructions so as to improve performance.

    A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[17]

    Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.

The control system's function is as follows (a short sketch after the list walks through the same cycle)—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:

    1. Read the code for the next instruction from the cell indicated by the program counter.
    2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
    3. Increment the program counter so it points to the next instruction.
    4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
    5. Provide the necessary data to an ALU or register.
    6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
    7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
    8. Jump back to step (1).
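
Here is the sketch promised above: a deliberately tiny, hypothetical rendering of the cycle, with comments keyed to the numbered steps (a real control unit works on binary encodings, not Python tuples):

    # Hypothetical rendering of the fetch-decode-execute cycle above.
    # Instructions are (opcode, operand) pairs; "ADD" adds a memory cell
    # into a single register.
    memory = {0: ("ADD", 100), 1: ("ADD", 101), 2: ("HALT", None),
              100: 3, 101: 4}
    register, pc = 0, 0                  # pc is the program counter

    while True:
        opcode, operand = memory[pc]     # steps 1-2: fetch and decode
        pc += 1                          # step 3: increment the program counter
        if opcode == "HALT":
            break
        data = memory[operand]           # step 4: read the required data
        register += data                 # steps 5-6: feed the ALU and operate
        # step 7: here the result simply stays in the register
        # step 8: loop back to step 1

    print(register)  # 7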

    Input/output (I/O)

    I/O is the means by which a computer receives information from the outside world and sends results back. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

    Multitasking

    While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

    Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly - in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run at the same time without unacceptable speed loss.
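
Generators give a rough feel for time slicing; in this toy model each "program" voluntarily yields at the end of its slice, whereas a real operating system preempts programs with interrupts:

    from collections import deque

    # Toy time-sharing: run each program for one slice, then switch.
    def program(name, steps):
        for i in range(steps):
            print(name, "step", i)
            yield                        # end of this program's time slice

    ready = deque([program("A", 3), program("B", 2)])
    while ready:
        task = ready.popleft()
        try:
            next(task)                   # give the task one time slice
            ready.append(task)           # still runnable: back of the queue
        except StopIteration:
            pass                         # the program has finished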

    History of computing

    It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device.

    The history of the modern computer begins with two separate technologies - that of automated calculation and that of programmability.

    Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when.[3] This is the essence of programmability.

    The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer.[4] It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour,[5][6] and five robotic musicians who play music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed every day in order to account for the changing lengths of day and night throughout the year.[4]

    The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.

    In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

    It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine".[7] Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.

    Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating Recording Corporation, which later became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

    During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.