You may remember from your primary school science class that nature can be characterized as an ecosystem: a complex system of interrelationships between the living and nonliving elements of an environment. Web development can also be understood as an ecosystem, one that builds on existing technologies (URLs, DNS, and the Internet) and contributes new protocols and standards (HTTP, HTML, and JavaScript) that facilitate client-server interactions. As this ecosystem matures, new client and server technologies, frameworks, and platforms continue to be developed in support of the web (PHP, jQuery, Bootstrap, etc.).

The rich web development ecosystem has created entirely new areas of interest for both research and business, including search engines, social networks, e-commerce, content management systems, and more. Just as you don't need to know everything about worms, trees, birds, amphibians, and dirt to be a biologist, you don't need to understand every concept in the ecosystem in complete depth to be a successful web developer. Nonetheless, it is important to see how this complicated network of concepts and technologies defines the scope of modern web development.

Web development can be visualized as a three-story building with some unusual things going on inside. The analogy captures the idea that web development is an activity with three broad levels. At the basement level are the foundational components, necessary to make it all work but operating more or less out of sight. The main floor includes the topics usually understood to constitute web development: HTML, CSS, JavaScript, and some type of server-side programming language, such as PHP. Finally, on the upper level reside the most advanced topics, be they search algorithms, security threats, or advanced programming design.

The web is sometimes referred to as a client-server model of communication. In this model, there are two types of actors: clients and servers. The server is a computer agent that is normally active 24/7, listening for requests from clients. A client is a computer agent that makes requests to a server and receives responses in the form of response codes, images, text files, and other data.

Client machines are the desktops, laptops, smartphones, and tablets you see everywhere in daily life. These machines vary widely in operating system, processing speed, screen size, available memory, and storage. The essential characteristic of a client is that it can request particular resources from particular servers using URLs and then wait for the response.

These requests are processed in some way by the server. The server is the central repository, the command center, and the central hub of the client-server model. It hosts web applications, stores user and program data, and performs security and authorization tasks. Since one server may handle many thousands, or even millions, of client requests, the demands on servers can be high. A site that stores image or video data, for example, will require many terabytes of storage to accommodate the demands of its users. Likewise, a site with many scripts calculating values on the fly will require more CPU and RAM to process those requests in a reasonable amount of time.
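To make both roles concrete, here is a minimal sketch of the request-response cycle in JavaScript, assuming a Node.js runtime (version 18 or later, which provides a built-in fetch). The port number, URL, and response text are illustrative placeholders, not details from this chapter. The server listens continuously for incoming requests; the client requests a resource by URL and waits for the response code and body.

const http = require("http");

// Server: always on, listening for client requests on port 8080 (an arbitrary choice).
const server = http.createServer((request, response) => {
  response.writeHead(200, { "Content-Type": "text/plain" }); // response code 200 = OK
  response.end("Hello from the server\n");                   // response data
});

server.listen(8080, async () => {
  // Client: requests a particular resource by URL, then waits for the response.
  const reply = await fetch("http://localhost:8080/");
  console.log("Status code:", reply.status);  // e.g., 200
  console.log("Body:", await reply.text());   // the text data returned
  server.close();                             // shut down once the exchange is complete
});

Running this file performs one complete exchange: the client's request travels to the server, and the server answers with a status code and a small text payload, the same pattern a browser follows when it fetches a page.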