Networks are everywhere. The staggering complexity and seemingly chaotic nature of everyday life is in fact a collection of different networks interacting with us from the moment we wake up to the time we go to sleep. We are constantly surrounded by the social network, the financial network, the transport network, the telecommunications network, and even the networks within our own bodies. Understanding how these systems operate and interact with one another has traditionally been the realm of physicists, economists, biologists and mathematicians. Until recently, however, the study of networks lacked empirical grounding because it was extremely difficult to gather reliable data about large and complex systems. In recent years, the Internet has given researchers the opportunity to study and test mathematical descriptions of vast complex systems. The growth rate and structure of Cyberspace allow researchers to map the network and to test several previously unproven ideas about how its links and hubs interact with one another. With the Web, we now have the means to test the organisational structure of networks, their architecture and their growth, and even to make limited predictions about their behaviour, strengths and vulnerabilities.

This paper explores the possible implications of these theories for copyright law. The study of network architecture has opened new avenues of research into how the scale-free topologies present in the Web may suggest new strategies for copyright enforcement. Similarly, a better understanding of how websites link to one another could provide better tools for allocating liability and for distributing royalties more efficiently. The paper asks the following questions: How should we regulate networks if they exhibit certain deterministic characteristics? Does a better understanding of the technology make infringing behaviour easier to regulate?

(Note: the downloadable article is a draft version of the published work.)