Classic Original Edition Series: Computer Systems: An Integrated Approach


Computer Systems: An Integrated Approach (English edition) uses an integrated approach to systematically present the software and hardware of computer systems. The book is organized into five modules: processor, memory system, storage system, parallel system, and network. These modules respectively cover the processor and its related software issues; the memory system and the memory hierarchy; I/O and file systems; operating system issues and the architectural features of multiprocessors that support parallel programming; and the evolution of networking hardware together with the characteristics of the network protocol stack that handles various network behaviors. Through this book, readers will gain insight into the interplay between computer software and hardware and deepen their understanding of modern operating systems.

Basic Information

  • Original title: Computer Systems: An Integrated Approach to Architecture and Operating Systems
  • Title: Classic Original Edition Series: Computer Systems: An Integrated Approach
  • Author: Umakishore Ramachandran
  • Publication date: January 1, 2011
  • Language: English
  • ISBN: 9787111319559, 7111319559
  • Publisher: China Machine Press
  • Pages: 741
  • Format: 32mo
  • Brand: China Machine Press


Synopsis

Features of Computer Systems: An Integrated Approach (English edition): It adopts a heuristic, problem-driven teaching style that stimulates readers' interest, exploring step by step to demystify computer systems. Rich examples illustrate the relevant concepts and problems, deepening readers' understanding of the material. It provides extensive historical background and, for selected topics, discusses future trends, helping readers integrate what they have learned. Each chapter ends with exercises and suggestions for further reading to help readers master the material.

About the Authors

Authors: Umakishore Ramachandran (USA), William D. Leahy, Jr. (USA)

Umakishore Ramachandran received his Ph.D. in computer science from the University of Wisconsin-Madison in 1986. He is a professor of computer science at the Georgia Institute of Technology and director of the STAR Center & Korean Programs. His main research interests are architecture design, programming, and the analysis of parallel and distributed systems. His honors include the NSF Presidential Young Investigator Award and Georgia Tech's award for outstanding Ph.D. dissertation advising.
William D. Leahy, Jr. is a lecturer in computer science at the Georgia Institute of Technology.

Contents

Preface v
Chapter 1 Introduction 1
1.1 What Is Inside a Box? 2
1.2 Levels of Abstraction in a Computer System 2
1.3 The Role of the Operating System 5
1.4 What Is Happening Inside the Box? 8
1.4.1 Launching an Application on the Computer 10
1.5 Evolution of Computer Hardware 11
1.6 Evolution of Operating Systems 13
1.7 Roadmap of the Rest of the Book 14
Exercises 14
Bibliographic Notes and Further Reading 15

Chapter 2 Processor Architecture 18
2.1 What Is Involved in Processor Design?
2.2 How Do We Design an Instruction Set? 20
2.3 A Common High-Level Language Feature Set 21
2.4 Expressions and Assignment Statements 21
2.4.1 Where To Keep the Operands? 22
2.4.2 How Do We Specify a Memory Address in an Instruction? 26
2.4.3 How Wide Should Each Operand Be? 27
2.4.4 Endianness 30
2.4.5 Packing of Operands and Alignment of Word Operands 32
2.5 High-Level Data Abstractions 35
2.5.1 Structures 35
2.5.2 Arrays 35
2.6 Conditional Statements and Loops 37
2.6.1 If-Then-Else Statement 38
2.6.2 Switch Statement 40
2.6.3 Loop Statement 41
2.7 Checkpoint 42
2.8 Compiling Function Calls 42
2.8.1 State of the Caller 43
2.8.2 Remaining Chores with Procedure Calling 46
2.8.3 Software Convention 47
2.8.4 Activation Record 54
2.8.5 Recursion 54
2.8.6 Frame Pointer 55
2.9 Instruction-Set Architectural Choices 58
2.9.1 Additional Instructions 58
2.9.2 Additional Addressing Modes 58
2.9.3 Architecture Styles 59
2.9.4 Instruction Format 59
2.10 LC-2200 Instruction Set 62
2.10.1 Instruction Format 63
2.10.2 LC-2200 Register Set 65
2.11 Issues Influencing Processor Design 66
2.11.1 Instruction Set 66
2.11.2 Influence of Applications on Instruction Set Design 67
2.11.3 Other Issues Driving Processor Design 68
Summary 70
Exercises 70
Bibliographic Notes and Further Reading 74

Chapter 3 Processor Implementation 76
3.1 Architecture versus Implementation 76
3.2 What Is Involved in Processor Implementation? 77
3.3 Key Hardware Concepts 78
3.3.1 Circuits 78
3.3.2 Hardware Resources of the Datapath 79
3.3.3 Edge-Triggered Logic 80
3.3.4 Connecting the Datapath Elements 82
3.3.5 Toward Bus-Based Design 86
3.3.6 Finite State Machine (FSM) 89
3.4 Datapath Design 91
3.4.1 ISA and Datapath Width 93
3.4.2 Width of the Clock Pulse 94
3.4.3 Checkpoint 95
3.5 Control Unit Design 95
3.5.1 ROM Plus State Register 96
3.5.2 FETCH Macro State 99
3.5.3 DECODE Macro State 102
3.5.4 EXECUTE Macro State: ADD Instruction (Part of R-Type) 103
3.5.5 EXECUTE Macro State: NAND Instruction (Part of R-Type) 106
3.5.6 EXECUTE Macro State: JALR Instruction (Part of J-Type) 106
3.5.7 EXECUTE Macro State: LW Instruction (Part of I-Type) 108
3.5.8 EXECUTE Macro State: SW and ADDI Instructions (Part of I-Type) 111
3.5.9 EXECUTE Macro State: BEQ Instruction (Part of I-Type) 112
3.5.10 Engineering a Conditional Branch in the Microprogram 116
3.5.11 DECODE Macro State Revisited 116
3.6 Alternative Style of Control Unit Design 119
3.6.1 Microprogrammed Control 119
3.6.2 Hardwired Control 119
3.6.3 Choosing Between the Two Control Design Styles 121
Summary 122
Historical Perspective 122
Exercises 124
Bibliographic Notes and Further Reading 128

Chapter 4 Interrupts, Traps, and Exceptions 129
4.1 Discontinuities in Program Execution 130
4.2 Dealing with Program Discontinuities 132
4.3 Architectural Enhancements to Handle Program Discontinuities 135
4.3.1 Modifications to FSM 136
4.3.2 A Simple Interrupt Handler 137
4.3.3 Handling Cascaded Interrupts 138
4.3.4 Returning from the Handler 141
4.3.5 Checkpoint 143
4.4 Hardware Details for Handling Program Discontinuities 143
4.4.1 Datapath Details for Interrupts 143
4.4.2 Details of Receiving the Address of the Handler 146
4.4.3 Stack for Saving/Restoring 147
4.5 Putting It All Together 149
4.5.1 Summary of Architectural/Hardware Enhancements 149
4.5.2 Interrupt Mechanism at Work 150
Summary 152
Exercises 154
Bibliographic Notes and Further Reading 155

Chapter 5 Processor Performance and Pipelined Processor Design 156
5.1 Space and Time Metrics 156
5.2 Instruction Frequency 160
5.3 Benchmarks 161
5.4 Increasing the Processor Performance 165
5.5 Speedup 167
5.6 Increasing the Throughput of the Processor 171
5.7 Introduction to Pipelining 171
5.8 Toward an Instruction-Processing Assembly Line 172
5.9 Problems with a Simple-Minded Instruction Pipeline 175
5.10 Fixing the Problems with the Instruction Pipeline 176
5.11 Datapath Elements for the Instruction Pipeline 178
5.12 Pipeline-Conscious Architecture and Implementation 180
5.12.1 Anatomy of an Instruction Passage Through the Pipeline 181
5.12.2 Design of the Pipeline Registers 184
5.12.3 Implementation of the Stages 185
5.13 Hazards 185
5.13.1 Structural Hazard 186
5.13.2 Data Hazard 187
5.13.3 Control Hazard 200
5.13.4 Summary of Hazards 209
5.14 Dealing with Program Discontinuities in a Pipelined Processor 211
5.15 Advanced Topics in Processor Design 214
5.15.1 Instruction-Level Parallelism 214
5.15.2 Deeper Pipelines 215
5.15.3 Revisiting Program Discontinuities in the Presence of Out-Of-Order Processing 218
5.15.4 Managing Shared Resources 219
5.15.5 Power Consumption 221
5.15.6 Multicore Processor Design 221
5.15.7 Intel Core Microarchitecture: An Example Pipeline 222
Summary 225
Historical Perspective 225
Exercises 226
Bibliographic Notes and Further Reading 231

Chapter 6 Processor Scheduling 233
6.1 Introduction 233
6.2 Programs and Processes 235
6.3 Scheduling Environments 239
6.4 Scheduling Basics 242
6.5 Performance Metrics 245
6.6 Nonpreemptive Scheduling Algorithms 247
6.6.1 First-Come First-Served (FCFS) 247
6.6.2 Shortest Job First (SJF) 252
6.6.3 Priority 255
6.7 Preemptive Scheduling Algorithms 256
6.7.1 Round Robin Scheduler 259
6.8 Combining Priority and Preemption 264
6.9 Meta-Schedulers 264
6.10 Evaluation 265
6.11 Impact of Scheduling on Processor Architecture 267
Summary and a Look Ahead 268
Linux Scheduler—A Case Study 270
Historical Perspective 273
Exercises 275
Bibliographic Notes and Further Reading 276

Chapter 7 Memory Management Techniques 277
7.1 Functionalities Provided by a Memory Manager 278
7.2 Simple Schemes for Memory Management 280
7.3 Memory Allocation Schemes 285
7.3.1 Fixed-Size Partitions 286
7.3.2 Variable-Size Partitions 287
7.3.3 Compaction 289
7.4 Paged Virtual Memory 290
7.4.1 Page Table 293
7.4.2 Hardware for Paging 295
7.4.3 Page Table Setup 296
7.4.4 Relative Sizes of Virtual and Physical Memories 296
7.5 Segmented Virtual Memory 297
7.5.1 Hardware for Segmentation 303
7.6 Paging versus Segmentation 303
7.6.1 Interpreting the CPU-Generated Address 306
Summary 308
Historical Perspective 309
MULTICS 311
Intel’s Memory Architecture 312
Exercises 314
Bibliographic Notes and Further Reading 315

Chapter 8 Details of Page-Based Memory Management 316
8.1 Demand Paging 316
8.1.1 Hardware for Demand Paging 317
8.1.2 Page Fault Handler 318
8.1.3 Data Structures for Demand-Paged Memory Management 318
8.1.4 Anatomy of a Page Fault 320
8.2 Interaction Between the Process Scheduler and Memory Manager 324
8.3 Page Replacement Policies 326
8.3.1 Belady’s Min 326
8.3.2 Random Replacement 327
8.3.3 First In First Out (FIFO) 327
8.3.4 Least Recently Used (LRU) 329
8.3.5 Second Chance Page Replacement Algorithm 333
8.3.6 Review of Page Replacement Algorithms 336
8.4 Optimizing Memory Management 336
8.4.1 Pool of Free Page Frames 336
8.4.2 Thrashing 338
8.4.3 Working Set 340
8.4.4 Controlling Thrashing 341
8.5 Other Considerations 343
8.6 Translation Lookaside Buffer (TLB) 343
8.6.1 Address Translation with TLB 344
8.7 Advanced Topics in Memory Management 346
8.7.1 Multi-Level Page Tables 346
8.7.2 Access Rights As Part of the Page Table Entry 348
8.7.3 Inverted Page Tables 349
Summary 349
Exercises 350
Bibliographic Notes and Further Reading 352

Chapter 9 Memory Hierarchy 353
9.1 The Concept of a Cache 354
9.2 Principle of Locality 355
9.3 Basic Terminologies 355
9.4 Multilevel Memory Hierarchy 357
9.5 Cache Organization 360
9.6 Direct-Mapped Cache Organization 360
9.6.1 Cache Lookup 363
9.6.2 Fields of a Cache Entry 365
9.6.3 Hardware for a Direct-Mapped Cache 366
9.7 Repercussion on Pipelined Processor Design 369
9.8 Cache Read/Write Algorithms 370
9.8.1 Read Access to the Cache from the CPU 370
9.8.2 Write Access to the Cache from the CPU 370
9.9 Dealing with Cache Misses in the Processor Pipeline 375
9.9.1 Effect of Memory Stalls Due to Cache Misses on Pipeline Performance 376
9.10 Exploiting Spatial Locality to Improve Cache Performance 377
9.10.1 Performance Implications of Increased Block Size 383
9.11 Flexible Placement 384
9.11.1 Fully Associative Cache 385
9.11.2 Set Associative Cache 387
9.11.3 Extremes of Set Associativity 387
9.12 Instruction and Data Caches 392
9.13 Reducing Miss Penalty 393
9.14 Cache Replacement Policy 394
9.15 Recapping Types of Misses 396
9.16 Integrating TLB and Caches 399
9.17 Cache Controller 400
9.18 Virtually Indexed Physically Tagged Cache 401
9.19 Recap of Cache Design Considerations 404
9.20 Main Memory Design Considerations 405
9.20.1 Simple Main Memory 405
9.20.2 Main Memory and Bus to Match Cache Block Size 406
9.20.3 Interleaved Memory 407
9.21 Elements of Modern Main Memory Systems 408
9.21.1 Page Mode DRAM 413
9.22 Performance Implications of Memory Hierarchy 415
Summary 416
Memory Hierarchy of Modern Processors—An Example 418
Exercises 419
Bibliographic Notes and Further Reading 422

Chapter 10 Input/Output and Stable Storage 423
10.1 Communication Between the CPU and the I/O Devices 423
10.1.1 Device Controller 424
10.1.2 Memory Mapped I/O 425
10.2 Programmed I/O 427
10.3 DMA 429
10.4 Buses 432
10.5 I/O Processor 433
10.6 Device Driver 434
10.6.1 An Example 436
10.7 Peripheral Devices 438
10.8 Disk Storage 440
10.8.1 The Saga of Disk Technology 448
10.9 Disk Scheduling Algorithms 451
10.9.1 First-Come-First-Served (FCFS) 453
10.9.2 Shortest Seek Time First (SSTF) 453
10.9.3 SCAN (Elevator Algorithm) 453
10.9.4 C-SCAN (Circular Scan) 455
10.9.5 LOOK and C-LOOK 455
10.9.6 Disk Scheduling Summary 457
10.9.7 Comparison of the Algorithms 458
10.10 Solid State Drive 459
10.11 Evolution of I/O Buses and Device Drivers 461
10.11.1 Dynamic Loading of Device Drivers 463
10.11.2 Putting it All Together 463
Summary 466
Exercises 466
Bibliographic Notes and Further Reading 468

Chapter 11 File System 469
11.1 Attributes 469
11.2 Design Choices in Implementing a File System on a Disk Subsystem 476
11.2.1 Contiguous Allocation 477
11.2.2 Contiguous Allocation with Overflow Area 480
11.2.3 Linked Allocation 480
11.2.4 File Allocation Table (FAT) 481
11.2.5 Indexed Allocation 483
11.2.6 Multilevel Indexed Allocation 485
11.2.7 Hybrid Indexed Allocation 486
11.2.8 Comparison of the Allocation Strategies 490
11.3 Putting It All Together 491
11.3.1 i-node 497
11.4 Components of the File System 498
11.4.1 Anatomy of Creating and Writing Files 499
11.5 Interaction Among the Various Subsystems 500
11.6 Layout of the File System on the Physical Media 503
11.6.1 In Memory Data Structures 507
11.7 Dealing with System Crashes 508
11.8 File Systems for Other Physical Media 508
11.9 A Glimpse of Modern File Systems 509
11.9.1 Linux 509
11.9.2 Microsoft Windows 515
Summary 517
Exercises 518
Bibliographic Notes and Further Reading 520

Chapter 12 Multithreaded Programming and Multiprocessors 521
12.1 Why Multithreading? 522
12.2 Programming Support for Threads 523
12.2.1 Thread Creation and Termination 523
12.2.2 Communication Among Threads 526
12.2.3 Read-Write Conflict, Race Condition, and Nondeterminism 528
12.2.4 Synchronization Among Threads 533
12.2.5 Internal Representation of Data Types Provided by the Threads Library 540
12.2.6 Simple Programming Examples 541
12.2.7 Deadlocks and Livelocks 546
12.2.8 Condition Variables 548
12.2.9 A Complete Solution for the Video Processing Example 553
12.2.10 Discussion of the Solution 554
12.2.11 Rechecking the Predicate 556
12.3 Summary of Thread Function Calls and Threaded Programming Concepts 559
12.4 Points to Remember in Programming with Threads 561
12.5 Using Threads as Software Structuring Abstraction 561
12.6 POSIX pthreads Library Calls Summary 562
12.7 OS Support for Threads 565
12.7.1 User Level Threads 567
12.7.2 Kernel-Level Threads 570
12.7.3 Solaris Threads: An Example of Kernel-Level Threads 572
12.7.4 Threads and Libraries 573
12.8 Hardware Support for Multithreading in a Uniprocessor 574
12.8.1 Thread Creation, Termination, and Communication Among Threads 574
12.8.2 Inter-Thread Synchronization 575
12.8.3 An Atomic Test-and-Set Instruction 575
12.8.4 Lock Algorithm with Test-and-Set Instruction 577
12.9 Multiprocessors 578
12.9.1 Page Tables 579
12.9.2 Memory Hierarchy 580
12.9.3 Ensuring Atomicity 582
12.10 Advanced Topics 583
12.10.1 OS Topics 583
12.10.2 Architecture Topics 596
12.10.3 The Road Ahead: Multicore and Many-Core Architectures 610
Summary 612
Historical Perspective 612
Exercises 614
Bibliographic Notes and Further Reading 617

Chapter 13 Fundamentals of Networking and Network Protocols 620
13.1 Preliminaries 620
13.2 Basic Terminologies 621
13.3 Networking Software 626
13.4 Protocol Stack 628
13.4.1 Internet Protocol Stack 628
13.4.2 OSI Model 631
13.4.3 Practical Issues with Layering 632
13.5 Application Layer 632
13.6 Transport Layer 634
13.6.1 Stop-and-Wait Protocols 636
13.6.2 Pipelined Protocols 640
13.6.3 Reliable Pipelined Protocol 642
13.6.4 Dealing with Transmission Errors 647
13.6.5 Transport Protocols on the Internet 648
13.6.6 Transport Layer Summary 651
13.7 Network Layer 652
13.7.1 Routing Algorithms 653
13.7.2 Internet Addressing 658
13.7.3 Network Service Model 663
13.7.4 Network Routing versus Forwarding 668
13.7.5 Network Layer Summary 668
13.8 Link Layer and Local Area Networks 670
13.8.1 Ethernet 670
13.8.2 CSMA/CD 671
13.8.3 IEEE 802.3 673
13.8.4 Wireless LAN and IEEE 802.11 674
13.8.5 Token Ring 675
13.8.6 Other Link-Layer Protocols 676
13.9 Networking Hardware 678
13.10 Relationship Between the Layers of the Protocol Stack 683
13.11 Data Structures for Packet Transmission 685
13.11.1 TCP/IP Header 687
13.12 Message Transmission Time 688
13.13 Summary of Protocol-Layer Functionalities 694
13.14 Networking Software and the Operating System 695
13.14.1 Socket Library 695
13.14.2 Implementation of the Protocol Stack in the Operating System 697
13.14.3 Network Device Driver 697
13.15 Network Programming Using UNIX Sockets 699
13.16 Network Services and Higher-Level Protocols 706
Summary 708
Historical Perspective 709
From Telephony to Computer Networking 709
Evolution of the Internet 712
PC and the Arrival of LAN 713
Evolution of LAN 713
Exercises 715
Bibliographic Notes and Further Reading 718

Chapter 14 Epilogue: A Look Back at the Journey 720
14.1 Processor Design 720
14.2 Process 720
14.3 Virtual Memory System and Memory Management 721
14.4 Memory Hierarchy 722
14.5 Parallel System 722
14.6 Input/Output Systems 722
14.7 Persistent Storage 723
14.8 Network 723
Concluding Remarks 723
Appendix: Network Programming with UNIX Sockets 724
Bibliography 736

Preface

There is excitement when you talk to high school students about computers. There is a sense of mystery as to what is "inside the box" that makes the computer do such things as play video games with cool graphics, play music (be it rap or symphony), send instant messages to friends, and so on. The purpose behind this textbook is to take the journey together to discover the mystery of what is inside the box. As a glimpse of what is to come, let us say at the outset that what makes the box interesting is not just the hardware, but also how the hardware and the system software work in tandem to make it all happen. Therefore, the path we take in this book is to look at hardware and software together, to see how one helps the other and how together they make the box interesting and useful. We call this approach "unraveling the box"; that is, resolving the mystery of what is inside the box: we look inside the box and understand how to design the key hardware elements (processor, memory, and peripheral controllers) and the OS abstractions needed to manage all the hardware resources inside a computer, including processor, memory, I/O and disk, multiple processors, and network. Hence, this is a textbook for a first course in computer systems, embodying a novel integrated approach to these topics.
