1. 【Better Explained】
  2. Part I Program Structure and Execution
    1. Representing and Manipulating Information
      1. 【Better Explained】
        1. 01: the true face of information and data on a computer, their most fundamental form of existence. Every other form derives from this one: memory-level hexadecimal and binary numbers
      2. Computer numbering formats http://en.wikipedia.org/wiki/Computer_numbering_formats
        1. 【Better Explained】
          1. Arithmetic overflow
          2. Byte order or endianness
          3. big-endian
          4. In memory this matches the usual written order; it is also the network byte order
          5. little-endian
          6. Linux
          7. Conversions Between Signed and Unsigned
          8. Equal word sizes
          9. In C, converting between unsigned and signed of the same width may change the value (non-negative values are unchanged), while the bit pattern in memory stays the same
          10. Unequal word sizes
          11. Expanding the Bit Representation of a Number
          12. 【Better Explained】
          13. short foo = -12345; (unsigned) foo is equivalent to (unsigned)(int) foo and has value 4294954951; (unsigned)(unsigned short) foo has value 53191 [well worth thinking through!]
          14. The short is first widened to int [sign extension], and only then reinterpreted from signed to unsigned [machine interpretation] [CS:APP's explanation]
          15. My explanation: when a conversion happens, first determine the type being converted from [signed or unsigned?]. If it is signed, perform sign extension [decide from the current bit pattern whether the value is positive or negative in that type, and extend so that the sign is preserved]. That's all! There is no separate signed-to-unsigned step; the CPU simply decides signedness from context [the printf conversion specifier, or the type given at declaration]. So the whole conversion really consists of just the extension step.
          16. Zero extension
          17. Converting an unsigned number to a wider type
          18. Sign extension
          19. Converting a signed number to a wider type
          20. Truncating Numbers
          21. x mod 2^k
        2. Data types
          1. Integer
          2. Unsigned Encodings
          3. B2Uw
          4. UMaxw
          5. Signed number representations
          6. Sign-and-magnitude method
          7. Ones' complement
          8. Two’s-complement
          9. Excess-N
          10. Base −2
          11. Floating point http://en.wikipedia.org/wiki/Floating_point
          12. IEEE 754-2008
          13. Rounding algorithms
          14. Roundings to nearest
          15. Round to nearest, ties away from zero
          16. Round to nearest, ties to even
          17. Directed roundings
          18. Round toward −∞
          19. Round toward 0
          20. Round toward +∞
          21. IEEE 754: floating point in modern computers
          22. 【Better Explained】
          23. exp is unsigned
          24. Normalized
          25. exp is neither all 0s nor 255; the significand (fraction field) is any value in [0, 1); s is 0 or 1
          26. Denormalized
          27. Positive and negative zero
          28. exp all 0s; significand all 0s; s is 0 or 1
          29. gradual underflow
          30. exp all 0s; significand nonzero; s is 0 or 1
          31. Special values
          32. +∞ -∞
          33. exp all 1s; significand all 0s; s is 0 or 1
          34. NaN
          35. Categories of single-precision floating-point values
          36. Internal representation
          37. Half
          38. 11-bit significand (10 bits stored)
          39. Single
          40. 24-bit significand
          41. Double
          42. 53-bit significand
          43. Extended
          44. 64-bit significand
          45. Quad
          46. 113-bit significand (112 bits stored)
          47. Special values
          48. Signed zero
          49. Subnormal numbers
          50. Infinities
          51. NaNs
          52. word
          53. A quantity 32 or 64 binary digits wide (bit in the mathematical sense), as distinct from the bit that is the smallest unit of memory (the hardware sense)
        3. Numeral system
          1. Octal
          2. Hexadecimal Notation
          3. Two’s-Complement Encodings
          4. Unsigned Encodings
          5. Binary numeral system http://en.wikipedia.org/wiki/Binary_numeral_system#Fractions_in_binary
          6. Fractional Binary Numbers
          7. That is, fixed point
      3. Character encoding http://en.wikipedia.org/wiki/Character_encoding
        1. UTF-8
        2. UTF-16
        3. GB2312
        4. ...
      4. Computational complexity of mathematical operations http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
        1. Arithmetic
          1. Integer Arithmetic
          2. 【Better Explained】
          3. Integer arithmetic on a computer is really a form of modular arithmetic
          4. Arithmetic overflow
          5. Abelian group
          6. Binary multiplier
          7. Unsigned Addition
          8. overflow
          9. Two’s-Complement Addition
          10. Negative overflow
          11. Positive overflow
          12. Two’s-Complement Negation
          13. Complement and Increment
          14. Complement Upper Bits
          15. Unsigned Multiplication
          16. Just modular multiplication
          17. Two’s-Complement Multiplication
          18. http://www1.hrbust.edu.cn/zuzhijigou/metc/material/zcyl/Chap02/2.3.2.htm [when working it out by hand, wrap the sign-bit partial product in parentheses to keep it distinct]
          19. Multiplying by Constants
          20. Convert multiplication into shifts, additions, and subtractions
          21. Dividing by Powers of Two
          22. Floating-point arithmetic operations
        2. Bitwise operation
          1. 【Better Explained】
          2. Boolean algebra
          3. Boolean ring
          4. Logical reasoning
          5. Deduction
          6. Induction
          7. Abduction
          8. Shift-based division rounds toward zero [from both the positive and the negative side, values move toward zero] (strictly, a plain right shift rounds toward -infinity; a bias is added first to get round-toward-zero)
          9. Boolean Operations
          10. Bitwise NOT
          11. Bitwise AND
          12. Bitwise OR
          13. Bitwise XOR
          14. Bit shifts
          15. Arithmetic shift
          16. signed binary numbers
          17. Logical shift
          18. unsigned binary numbers
          19. Rotate no carry
          20. Rotate through carry
        3. Logical Operations [added by me; not in the parent entry]
          1. Logical negation (NOT)
          2. Logical AND
          3. Logical OR
    2. Machine-Level Representation of Programs
      1. 【Better Explained】
        1. Computers execute machine code, sequences of bytes encoding the low-level operations that manipulate data, manage memory, read and write data on storage devices, and communicate over networks.
          1. The data on the computer set in motion!! A handful of elementary operations on data [only the most common and essential ones] are mapped one-to-one onto byte sequences [just runs of contiguous binary bits], and by having the CPU recognize those byte sequences, the corresponding operations are carried out. Fundamentally, the byte sequences encoding these elementary operations are themselves data.
        2. Assembly code, a textual representation of the machine code giving the individual instructions in the program.
          1. Assembly code is the textual form of those elementary-operation byte sequences. Fundamentally there is no difference; machine code is simply for the machine to read, and assembly code is for people
        3. Converting between assembly instructions and machine code
          1. http://blog.csdn.net/jiangyuanfu/archive/2009/08/18/4456306.aspx
        4. Assembly Language
        5. x86
        6. x86 assembly language
      2. GNU toolchain http://en.wikipedia.org/wiki/GNU_toolchain
        1. 【Better Explained】
          1. Gnu as Document
          2. http://sourceware.org/binutils/docs-2.16/as/
        2. GNU make
        3. GNU Compiler Collection (GCC)
          1. The C preprocessor (cpp)
          2. gcc -E file.i
          3. The C compiler (cc1)
          4. gcc -S file.s
          5. The GNU Assembler(GAS,as)
          6. gcc -c file.o
          7. GNU linker (GNU ld)
          8. gcc -o file
        4. GNU Binutils
        5. GNU Bison
        6. GNU m4
        7. GNU Debugger (GDB)
          1. objdump maybe the best partner
        8. GNU build system (autotools)
          1. Autoconf
          2. Autoheader
          3. Automake
          4. Libtool
      3. AT&T assembly
        1. 【Better Explained】
          1. Both operands cannot be memory references at the same time
        2. Accessing Information
          1. x86 registers
          2. %eax and %ah stand in a whole/part relationship
          3. Operand Specifiers
          4. immediate
          5. In ATT-format assembly code, these are written with a ‘$’ followed by an integer using standard C notation, for example, $-577 or $0x1F.
          6. register
          7. Register, denotes the contents of one of the registers, either one of the eight 32-bit registers (e.g., %eax) for a double-word operation, one of the eight 16-bit registers (e.g., %ax) for a word operation, or one of the eight single-byte register elements (e.g., %al) for a byte operation.
          8. memory reference
          9. Operand forms
          10. Data Movement Instructions
          11. Data movement instructions
          12. With sign expansion, the upper bits of the destination are filled in with copies of the most significant bit of the source value.
        3. Arithmetic and Logical Operations
          1. Integer arithmetic operations
          2. Load Effective Address
          3. The destination operand of leal must be a register
          4. Unary Operations
          5. Binary Operations
          6. Shift Operations
          7. Special Arithmetic Operations
        4. Control
          1. Condition Codes
          2. Accessing the Condition Codes
          3. SET instructions
          4. CMP instructions
          5. TEST instructions
          6. Jump Instructions
      4. Translating C into assembly language
        1. Conditional Branches
        2. Loops
        3. Conditional Move Instructions
        4. Switch Statements
        5. Array Allocation and Access
          1. Basic Principles
          2. Pointer Arithmetic
          3. Nested Arrays
          4. Fixed-Size Arrays
          5. Variable-Size Arrays
        6. Heterogeneous Data Structures
          1. Structures
          2. Unions
          3. Data Alignment
      5. Procedures
        1. Stack Frame Structure
        2. Transferring Control
        3. Register Usage Conventions
        4. Recursive Procedures
        5. Out-of-Bounds Memory References and Buffer Overflow
      6. x86-64
    3. Processor Architecture
      1. List of CPU architectures http://en.wikipedia.org/wiki/List_of_CPU_architectures
        1. Embedded CPU architectures
          1. ARM's ARM Architecture
          2. Intel's 8051 architecture
          3. Zilog's Z80 architecture
          4. Western Design Center's 65816 architecture
          5. Hitachi's SuperH architecture
          6. Axis Communications' ETRAX CRIS architecture
          7. Power Architecture (formerly PowerPC)
        2. Microcomputer CPU architectures
          1. x86
          2. IA-32
          3. x86-64
          4. AMD64
          5. Intel 64
          6. Advanced RISC Machines' (originally Acorn) ARM
          7. Motorola's 6800 and 68000 architectures
          8. Power Architecture (formerly IBM POWER and PowerPC)
        3. Workstation/Server CPU architectures
          1. Intel's Itanium architecture (formerly IA-64)
          2. Sun Microsystems's SPARC architecture
          3. Power Architecture (formerly IBM POWER and PowerPC)
          4. DEC's Alpha architecture
        4. Mini/Mainframe CPU architectures
          1. IBM's System/360, System/370, ESA/390 and z/Architecture (1964-present)
          2. DEC's PDP-11 architecture, and its successor, the VAX architecture
        5. Mixed-core CPU architectures
          1. IBM's Cell architecture
      2. List of Instruction sets http://en.wikipedia.org/wiki/List_of_instruction_sets
        1. Intel
        2. AMD
        3. ARM
        4. Donald Knuth
        5. IBM
        6. Motorola
      3. ISA http://en.wikipedia.org/wiki/Instruction_set
      4. CPU Design http://en.wikipedia.org/wiki/CPU_design
        1. 【Better Explained】
          1. CPU design focuses on these areas:
          2. datapaths (such as ALUs and pipelines)
          3. control unit: logic which controls the datapaths
          4. Memory components such as register files, caches
          5. Clock circuitry such as clock drivers, PLLs, clock distribution networks
          6. Pad transceiver circuitry
          7. Logic gate cell library which is used to implement the logic
          8. A CPU design project generally has these major tasks:
          9. Programmer-visible instruction set architecture, which can be implemented by a variety of microarchitectures
          10. Architectural study and performance modeling in ANSI C/C++ or SystemC
          11. High-level synthesis (HLS) or RTL (e.g. logic) implementation
          12. RTL Verification
          13. Circuit design of speed critical components (caches, registers, ALUs)
          14. Logic synthesis or logic-gate-level design
          15. Timing analysis to confirm that all logic and circuits will run at the specified operating frequency
          16. Physical design including floorplanning, place and route of logic gates
          17. Checking that RTL, gate-level, transistor-level and physical-level representations are equivalent
          18. Checks for signal integrity, chip manufacturability
        2. Y86
          1. Y86 Instruction set Architecture
          2. 【Better Explained】
          3. Y86 Programs
          4. Some Y86 Instruction Details
          5. push %esp
          6. pop %esp
          7. Processor's State
          8. Program Registers
          9. Condition Codes
          10. Program Counter (PC)
          11. Memory
          12. Program status
          13. AOK
          14. HLT
          15. ADR
          16. INS
          17. Instruction set
          18. Four movl instructions
          19. Four integer operation instructions
          20. The seven jump instructions
          21. Six conditional move instructions
          22. The call instruction pushes the return address on the stack and jumps to the destination address.
          23. The ret instruction returns from such a call.
          24. The pushl and popl instructions
          25. The halt instruction
          26. Instruction Encoding
          27. 【Better Explained】
          28. Initial byte
          29. the high 4 bits: the code part
          30. the low 4 bits: the function part
          31. register specifier byte
          32. 4-byte constant word
          33. Exceptions
          34. Sequential Y86 Implementations
          35. Organizing Processing into Stages
          36. Fetch
          37. instruction fetch: read the instruction bytes from memory at the address in the PC
          38. Decode
          39. decode: read up to two operands from the register file
          40. Execute
          41. execute: the ALU performs the operation or computes an address; condition codes may be set
          42. Memory
          43. memory access: read or write program data in memory
          44. Write back
          45. write back: write up to two results to the register file
          46. PC update
          47. SEQ Hardware Structure
          48. Fetch
          49. Hardware Unit
          50. Instruction memory
          51. PC incrementer
          52. Control logic blocks
          53. Program status
          54. Decode
          55. Hardware Unit
          56. Register file
          57. A
          58. B
          59. Control logic blocks
          60. srcA
          61. srcB
          62. Program status
          63. Execute
          64. Hardware Unit
          65. ALU
          66. CC
          67. Control logic blocks
          68. ALU B
          69. ALU fun
          70. ALU A
          71. Program status
          72. Memory
          73. Hardware Unit
          74. Virtual Memory System
          75. Control logic blocks
          76. Mem. control
          77. Addr
          78. Data
          79. Program status
          80. Write back
          81. Hardware Unit
          82. Register file
          83. E
          84. M
          85. Control logic blocks
          86. dstM
          87. dstE
          88. Program status
          89. PC update
          90. SEQ Timing
          91. 【Better Explained】
          92. The processor never needs to read back the state updated by an instruction in order to complete the processing of this instruction.
          93. Some rule in Y86
          94. PC is loaded with a new instruction address every clock cycle
          95. CC is loaded only when an integer operation instruction is executed.
          96. Virtual memory is written only when an rmmovl, pushl, or call instruction is executed.
          97. Hardware unit that require Clock Sequence control
          98. Clocked registers
          99. PC
          100. CC
          101. Random-access memories
          102. Virtual memory system
          103. The register file
          104. SEQ stage implementations
          105. Fetch Stage
          106. seq_fetch.png
          107. Decode & Write back
          108. seq_decode&writeback.png
          109. Execute Stage
          110. seq_execute.png
          111. Memory Stage
          112. seq_memory.png
          113. PC update Stage
          114. seq_PC.png
          115. Pipelined Y86 Implementations
          116. SEQ+: Rearranging the Computation Stages
          117. Circuit Retiming
          118. Inserting Pipeline Registers
          119. Rearranging and Relabeling Signals
          120. Next PC Prediction
          121. Branch Prediction
          122. Pipeline Hazards
          123. Data Hazard
          124. Program registers
          125. Control Hazard
          126. PC
          127. Mispredicted branches
          128. ret instructions require special handling
          129. Avoiding Data Hazards by Stalling
          130. Avoiding Data Hazards by Forwarding
          131. Data forwarding
          132. Bypassing
          133. Load/Use Data Hazards
          134. One class of data hazards cannot be handled purely by forwarding, because memory reads occur late in the pipeline. In other words, the memory value valM, unlike valE, is not pinned down from the moment it first appears; the root cause is that it arrives late. CS:APP puts it very concisely.
          135. Load interlock
          136. Exception Handling
          137. Internal exceptions
          138. Halt
          139. An illegal instruction: an invalid combination of the code and function parts
          140. Fetching, reading, or writing an illegal address
          141. Excepting instruction
          142. Exception Handler
          143. This part is the OS's job
          144. Some details
          145. The instruction deepest in the pipeline has the highest priority; that is the one reported to the OS
          146. An excepting instruction that occurs after a mispredicted branch
          147. Different pipeline stages are updated by different instructions. For instance, an error raised in one stage by a malicious instruction could affect another instruction that should never have been executed.
          148. Some mechanism
          149. Carry the status and other information along into the program status through the W pipeline register
          150. PIPE Stage Implementations
          151. PC Selection and Fetch Stage
          152. Decode and Write-Back Stage
          153. Execute Stage
          154. Memory Stage
          155. Pipeline Control Logic
          156. Special logic control cases
          157. Processing ret
          158. The pipeline must stall until the ret instruction reaches the write-back stage.
          159. Load/use hazard
          160. The pipeline must stall for one cycle between an instruction that reads a value from memory and an instruction that uses this value.
          161. Mispredicted branches
          162. By the time the branch logic detects that a jump should not have been taken,several instructions at the branch target will have started down the pipeline. These instructions must be removed from the pipeline.
          163. Instruction squashing
          164. Exception
          165. 【Better Explained】
          166. Bad stuff
          167. Two stages can raise exceptions: Fetch and Memory
          168. Three stages update the program state: Execute, Memory, and Write-back
          169. If an instruction raises an exception, later instructions are forbidden from updating the programmer-visible state. In addition, when the excepting instruction reaches the write-back stage, the processor stops the program.
          170. Detecting Special Control Conditions
          171. Pipeline Control Mechanisms
          172. Combinations of Control Conditions
          173. Control logic implementation
        3. Logic Design
          1. Tools
          2. Hardware Description Language HDL
          3. Verilog
          4. VHDL
          5. Hardware Control Language HCL
          6. 【Better Explained】
          7. The differences between HCL's combinational circuits and C's logical expressions
          8. A combinational circuit reacts to its inputs continuously, in real time; a C logical expression is evaluated only when execution reaches it
          9. Combinational-circuit operands are only 0 and 1; in C, 0 means FALSE and anything else means TRUE
          10. C's logical operators short-circuit; combinational circuits do not
          11. Signal Declarations
          12. boolsig name ’C-expr’
          13. intsig name ’C-expr’
          14. Quoted Text
          15. quote ’string’
          16. Expressions
          17. HCL Boolean expressions
          18. 0
          19. Logic value 0
          20. 1
          21. Logic value 1
          22. name
          23. Named Boolean signal
          24. int-expr in {int-expr1, int-expr2, ..., int-exprk}
          25. Set membership test
          26. int-expr1 == int-expr2
          27. Equality test
          28. int-expr1 != int-expr2
          29. Not equal test
          30. int-expr1 < int-expr2
          31. Less than test
          32. int-expr1 <= int-expr2
          33. Less than or equal test
          34. int-expr1 > int-expr2
          35. Greater than test
          36. int-expr1 >= int-expr2
          37. Greater than or equal test
          38. !bool-expr
          39. NOT
          40. bool-expr1 && bool-expr2
          41. AND
          42. bool-expr1 || bool-expr2
          43. OR
          44. HCL Integer expressions
          45. Numbers
          46. Named integer signals
          47. Case expressions
          48. [bool-expr1 : int-expr1; bool-expr2 : int-expr2; ...;]
          49. Blocks
          50. bool name = bool-expr;
          51. int name = int-expr;
          52. Logic synthesis
          53. General Principles of Pipelining
          54. Pipeline
          55. Limitations of Pipelining
          56. Nonuniform Partitioning
          57. Diminishing Returns of Deep Pipelining
          58. Pipelining a System with Feedback
          59. 【Better Explained】
          60. Introducing pipelining into a system containing feedback paths is perilous
          61. We must deal with feedback effects properly
          62. Data dependency
          63. Control dependency
        4. Hardware Units
          1. Circuit
          2. Logic Gates
          3. Combinational Circuits
          4. Bit equal
          5. MUX,multiplexor
          6. Word-Level Combinational Circuits
          7. Arithmetic logic unit ALU
          8. Sequential circuit
          9. Memory device
          10. 【Better Explained】
          11. Reading and writing a register at the same time
          12. we observe the transition from the old value to the new value
          13. Clocked registers
          14. PC
          15. CC
          16. Random-access memories
          17. Virtual memory system
          18. The register file
          19. The instruction memory
          20. Clocking
    4. Optimizing Program Performance http://en.wikipedia.org/wiki/Program_optimization
      1. Common theme
        1. Trade-offs
        2. Bottlenecks
        3. When to optimize
        4. Macros
        5. Time taken for optimization
        6. Platform dependent and independent optimizations
      2. 【Better Explained】
        1. Writing Efficient Programs
          1. Bentley's rules
          2. Algorithms and Data structures
          3. Write source code that the compiler can effectively optimize.
          4. Parallel Computing
        2. Understanding Processors
          1. 【Better Explained】
          2. Boundary
          3. Latency bound
          4. Throughput bound
          5. the reciprocal of issue time
          6. Overall Operation
          7. Nehalem
          8. Superscalar
          9. Multiple operations per clock cycle
          10. Out-of-order
          11. ICU Instruction Control Unit
          12. Instruction Cache
          13. Fetch Control
          14. Branch prediction
          15. Speculative execution
          16. Instruction Decode
          17. FIFO
          18. Retired
          19. Flushed
          20. Retirement unit
          21. Register file
          22. EU Execution Unit
          23. Functional units
          24. 【Better Explained】
          25. Register renaming
          26. Load
          27. Store
          28. FP add + integer
          29. FP mul/div +integer
          30. Branch +integer
          31. Data cache
          32. Functional Unit Performance
          33. 【Better Explained】
          34. Latency
          35. determined by the depth of the pipeline
          36. Issue time
          37. Fully pipelined
          38. An Abstract Model of Processor Operation
          39. Transformation from machine code to data-flow
          40. Other performance stuffs
        3. Understanding Memory
          1. 【Better Explained】
          2. Write/read dependency
          3. the difference
          4. Load Store
          5. Store Load
          6. Load unit
          7. Store unit
          8. Store buffer
        4. Life in the Real World: Performance Improvement Techniques
          1. High-level design. Choose appropriate algorithms and data structures for the problem at hand. Be especially vigilant to avoid algorithms or coding techniques that yield asymptotically poor performance.
          2. Basic coding principles. Avoid optimization blockers so that a compiler can generate efficient code.
          3. Eliminate excessive function calls. Move computations out of loops when possible. Consider selective compromises of program modularity to gain greater efficiency.
          4. Eliminate unnecessary memory references. Introduce temporary variables to hold intermediate results. Store a result in an array or global variable only when the final value has been computed.
          5. Low-level optimizations.
          6. Try various forms of pointer versus array code.
          7. Reduce loop overhead by unrolling loops.
          8. Find ways to make use of the pipelined functional units by techniques such as iteration splitting.
          9. A final word of advice to the reader is to be careful to avoid expending effort on misleading results.
      3. Algorithmic efficiency http://en.wikipedia.org/wiki/Algorithmic_efficiency
      4. Source code level
        1. 【Better Explained】
          1. Expressing Program Performance
          2. GHz: 10^9 cycles per second
          3. CPE:Cycles Per Element
          4. strlen: asymptotic inefficiency
        2. Loop optimizations
          1. Loop-invariant code motion
          2. Loop unrolling
          3. 【Better Explained】
          4. Reducing the auxiliary operations
          5. Conditional branch test
          6. Computing loop index
          7. Reducing the critical path operations
          8. The last index
        3. Enhancing Parallelism: Reducing data dependencies
          1. Multiple Accumulators
          2. Loop unrolling , Multiple parallel
          3. IA32: threshold 4
          4. x86-64: threshold 12
          5. Reassociation Transformation
        4. Reducing procedure call
          1. Code Transformation and replacement
        5. Reducing memory reference
        6. Reducing conditional tests
          1. Transform imperative (command) style into functional style
      5. Compiler optimization http://en.wikipedia.org/wiki/Optimizing_compiler
        1. 【Better Explained】
          1. Optimization Blocker
          2. Memory aliasing
          3. Function calls
        2. Optimization techniques
          1. Common themes
          2. Loop optimizations
          3. Data-flow optimizations
          4. SSA-based optimizations
          5. Code generator optimizations
          6. Functional language optimizations
          7. Other optimizations
      6. Assembly level
      7. Run time
        1. Identifying and Eliminating Performance Bottlenecks
          1. Program Profiling
          2. Gprof
          3. Using a Profiler to Guide Optimization
          4. Amdahl’s Law
    5. The Memory Hierarchy http://en.wikipedia.org/wiki/Memory_hierarchy
      1. 【Better Explained】
        1. Buses
          1. 【Better explained】
          2. North bridge
          3. South bridge
          4. Front side bus,FSB
          5. HyperTransport
          6. QuickPath
          7. A bus is a collection of parallel wires that carry address, data, and control signals
          8. bus transaction
          9. read transaction
          10. write transaction
          11. System bus
          12. Memory bus
          13. I/O bridge
          14. Bus interface
          15. I/O bus
          16. Universal Serial Bus,USB
          17. Graphics Card
          18. Peripheral Component Interconnect ,PCI
          19. Host bus
          20. SCSI (pronounced "scuzzy")
          21. SATA (pronounced "sat-uh")
      2. Storage Technologies
        1. Random-Access Memory http://en.wikipedia.org/wiki/RAM
          1. Volatile memory
          2. SRAM
          3. bistable latching circuitry
          4. DRAM
          5. capacitor within an integrated circuit
          6. Memory Modules
          7. DIMM
          8. SIMM
          9. Conventional DRAM
          10. supercell
          11. two-dimensional array
          12. RAS
          13. CAS
          14. Internal row buffer
          15. Enhanced DRAM
          16. FPM DRAM
          17. EDO DRAM
          18. SDRAM
          19. DDR SDRAM
          20. DDR (2 bits)
          21. DDR2 (4 bits)
          22. DDR3 (8 bits)
          23. RDRAM
          24. VRAM
          25. VRAM output is produced by shifting the entire contents of the internal buffer in sequence
          26. VRAM allows concurrent reads and writes to the memory
          27. Nonvolatile Memory
          28. PROM
          29. EPROM
          30. EEPROM
          31. Flash
          32. ROM
          33. Firmware
        2. Disk Storage
          1. Disk drive
          2. platter
          3. Two surfaces
          4. magnetic recording material
          5. spindle
          6. revolutions per minute (RPM)
          7. Track
          8. Sector
          9. Gap
          10. formatting bits
          11. Cylinder
          12. Disk Capacity
          13. Recording density
          14. Track density
          15. Areal density
          16. Multiple zone recording
          17. Recording zone
          18. Disk Operation
          19. Read/write head
          20. At any point in time, all heads are positioned on the same cylinder
          21. Disks read and write data in sector-sized blocks
          22. Actuator arm
          23. radial axis
          24. Seek
          25. Head crash
          26. Access time
          27. Seek time
          28. Rotational latency
          29. Transfer time
          30. Logical Disk Blocks
          31. disk controller
          32. a small buffer on the controller
          33. Firmware
          34. logical block number
          35. (surface, track, sector) triple
          36. logical blocks
          37. Accessing Disks
          38. Memory-mapped I/O
          39. I/O port
          40. Direct Memory Access,DMA
        3. Solid State Disks
          1. USB
          2. SATA
          3. Flash chip
          4. Flash translation layer
          5. Block
          6. Page
      3. Locality
        1. 【Better Explained】
          1. they tend to reference data items that are near other recently referenced data items, or that were recently referenced themselves
          2. principle of locality
          3. temporal locality
          4. a memory location that is referenced once is likely to be referenced again multiple times in the near future
          5. spatial locality
          6. if a memory location is referenced once, then the program is likely to reference a nearby memory location in the near future
        2. Locality of References to Program Data
          1. Stride-1 reference pattern
          2. Sequential reference pattern
          3. stride-k reference pattern
        3. Locality of Instruction Fetches
        4. Summary of Locality
          1. Programs that repeatedly reference the same variables enjoy good temporal locality
          2. the smaller the stride the better the spatial locality
          3. The smaller the loop body and the greater the number of loop iterations, the better the locality
      4. The Memory Hierarchy
        1. Caching
          1. 【Better Explained】
          2. chunk
          3. block
          4. Transfer unit
          5. Cache Hits
          6. Cache Misses
          7. Replacing
          8. Evicting
          9. Victim block
          10. Replacement policy
          11. LRU
          12. Kinds of Cache Misses
          13. Cold cache
          14. Warmed up
          15. Compulsory miss/Cold miss
          16. Conflict miss
          17. Working set
          18. Capacity miss
          19. Cache Management
          20. compiler
          21. Register
          22. MMU
          23. TLB
          24. hardware logic built into the caches
          25. L1
          26. L2
          27. L3
          28. OS & address translation hardware on the CPU
          29. DRAM
        2. Cache Memories
          1. Generic Cache Memory Organization
          2. General organization of cache (S; E; B;m)
          3. cache
          4. cache set
          5. cache line
          6. block
          7. m address bits t tag bits, s set index bits, b block offset bits
          8. Direct-Mapped Caches
          9. Set Selection
          10. s
          11. Line Matching
          12. valid bit
          13. tag bit
          14. block offset bit
          15. Word Selection
          16. Line Replacement on Misses
          17. Conflict Misses
          18. Thrash
          19. Set Associative Caches
          20. E-way set associative cache
          21. Set Selection
          22. Line Matching
          23. Word Selection
          24. Line Replacement on Misses
          25. LRU
          26. LFU
          27. Fully Associative Caches
          28. Set Selection
          29. Only one
          30. Line Matching
          31. Word Selection
          32. Issues with Writes
          33. Write hit
          34. Write through
          35. Write back
          36. Dirty bit
          37. Write miss
          38. Write allocate
          39. Not write allocate
        3. Core i7 Cache Hierarchy
          1. Core 0
          2. L1 i -cache
          3. L1 d -cache
          4. L2 Unified cache
          5. L3 Unified cache
        4. Performance Impact of Cache Parameters
          1. Metrics
          2. Miss rate
          3. Hit rate
          4. Hit time
          5. Miss penalty
          6. Cache Size
          7. Block Size
          8. Associativity E
          9. L1, L2: 8-way
          10. L3: 16-way
          11. Write Strategy
      5. Writing Cache-friendly Code
        1. Make the common case go fast
        2. Minimize the number of cache misses in each inner loop
        3. Repeated references to local variables are good because the compiler can cache them in the register file (temporal locality)
        4. Stride-1 reference patterns are good because caches at all levels of the memory hierarchy store data as contiguous blocks (spatial locality)
      6. The Impact of Caches on Program Performance
        1. The Memory Mountain
  3. Part II Running Programs on a System
    1. Linking
      1. 【Better Explained】
        1. Time of the Linking
          1. Compile time
          2. Load time
          3. Run time
        2. Separate compilation
        3. Benefits of Understanding Linking
          1. Build large programs
          2. Avoid dangerous programming errors
          3. Understand how language scoping rules are implemented
          4. Understand other important systems concepts
          5. Exploit shared libraries
      2. Linking
        1. Symbol resolution
          1. C++ and Java use name mangling and demangling
          2. Multiply Defined Global Symbols
          3. Uninitialized global variables get weak symbols.
          4. Functions and initialized global variables get strong symbols.
          5. Rule 1: Multiple strong symbols are not allowed.
          6. Rule 2: Given a strong symbol and multiple weak symbols, choose the strong symbol.
          7. Rule 3: Given multiple weak symbols, choose any of the weak symbols.
          8. Resolve References with Static Libraries
          9. U
          10. E
          11. D
        2. Relocation
          1. Relocating sections and symbol definitions
          2. The linker then assigns run-time memory addresses to the new aggregate sections, to each section defined by the input modules, and to each symbol defined by the input modules.
          3. Relocating symbol references within sections
          4. Relocation Entries
          5. relocation types
          6. R_386_PC32
          7. R_386_32
          8. Relocating PC-Relative References
          9. Relocating Absolute References
        3. Static Linking
          1. Linking with Static Libraries
          2. The general rule for libraries is to place them at the end of the command line.
          3. On the other hand, if the libraries are not independent, then they must be ordered
          4. Libraries can be repeated on the command line if necessary to satisfy the dependence requirements.
        4. Dynamic Linking
          1. Shared libraries
          2. .interp
          3. JNI
          4. Linux API
          5. PIC
          6. GOT
          7. .data
          8. PLT
          9. .text
          10. PIC Data References
          11. performance disadvantages
          12. PIC Function Calls
          13. lazy binding
          14. Loading and Linking Shared Libraries from Applications
          15. Life in the real world
          16. Distribution Software
          17. Building high-performance web server
      3. Object Files
        1. Tools for Manipulating Object Files
          1. objdump
          2. readelf
          3. ar
          4. strings
          5. strip
          6. nm
          7. size
          8. ldd
        2. Object file formats
          1. a.out
          2. COFF
          3. PE
          4. ELF
        3. Relocatable object file
          1. Typical ELF relocatable object file.png
          2. symbol table
          3. Global symbols defined by the module (per file in C; per class in C++/Java)
          4. Does not include static global variables
          5. Global symbols defined by some other module
          6. Local symbols
          7. Includes only static global variables and static functions, not local variables
        4. Executable object file
          1. Typical ELF executable object file
          2. objdump -dx
          3. execve loader _start
          4. Linux run-time memory image.png
        5. Shared object file
    2. Exceptional Control Flow
      1. 【Better Explained】
        1. control transfer
        2. control flow of the processor.
          1. “smooth” sequence
          2. ECF
          3. Hardware level
          4. operating systems level
          5. application level
        3. Benefits of understanding ECF
          1. understand important systems concepts
          2. understand how applications interact with the operating system
          3. write interesting new application programs
          4. understand concurrency
          5. understand how software exceptions work
        4. Tools for Manipulating Processes
          1. strace
          2. ps
          3. top
          4. pmap
          5. /proc
      2. Exceptions
        1. 【Better Explained】
          1. An exception is an abrupt change in the control flow in response to some change in the processor’s state.
          2. processor’s state
          3. event
          4. related to current instruction
          5. unrelated to current instruction
          6. The differences to Procedure call
          7. Return
          8. Procedure call: pops the return address from the stack
          9. Depending on the class of exception
          10. Exception handlers run in kernel mode
          11. Pushes some additional processor state onto the stack, e.g., EFLAGS
          12. If control is being transferred from a user program to the kernel, all of these items are pushed onto the kernel’s stack rather than onto the user’s stack.
          13. This suggests that the exception occurred in user mode rather than kernel mode
          14. Asynchronous exceptions occur as a result of events in I/O devices that are external to the processor
          15. Synchronous exceptions occur as a direct result of executing an instruction
          16. system-level functions
          17. system calls
          18. The standard C library
        2. Exception Handling
          1. exception number
          2. exception table
          3. exception table base register
          4. exception handler
          5. Exception return
          6. depending on the type of event that caused the exception
          7. Icurr
          8. Inext
          9. Abort
        3. Classes of Exceptions
          1. Interrupts
          2. Signal from I/O device
          3. Async
          4. Handler returns to next instruction
          5. Traps
          6. Intentional exception
          7. Sync
          8. Handler returns to instruction following the syscall
          9. Faults
          10. Potentially recoverable error
          11. Sync
          12. Handler either reexecutes current instruction or aborts.
          13. Aborts
          14. Nonrecoverable error
          15. Sync
          16. Handler returns to abort routine
        4. Exceptions in Linux/IA32 Systems
          1. exception types
          2. 0 to 31 exceptions defined by the Intel architects
          3. 0 Divide error
          4. Fault
          5. 13 General protection fault
          6. Segmentation faults
          7. Fault
          8. 14 Page fault
          9. Fault
          10. 18 Machine check
          11. Abort
          12. 32 to 255 interrupts and traps defined by Linux
          13. 128 0x80 System call
          14. Trap
          15. Linux/IA32 System Calls
          16. By convention, %eax holds the syscall number; %ebx, %ecx, %edx, %esi, %edi, and %ebp hold the arguments
      3. Processes
        1. Context
          1. program’s code and data stored in memory
          2. the contents of its general-purpose registers
          3. the program counter
          4. status registers
          5. the floating-point registers
          6. user’s stack
          7. kernel’s stack
          8. environment variables
          9. various kernel data structures
          10. page table
          11. process table
          12. file table
        2. abstractions
          1. Logical control flow
          2. A sequence of PC values
          3. Corresponded exclusively to instructions contained in our program’s executable object file or in shared objects linked into our program dynamically at run time.
          4. Instances
          5. Exception handlers, processes, signal handlers, threads, and Java processes
          6. Concurrent Flows
          7. A logical flow whose execution overlaps in time with another flow
          8. concurrency
          9. The general phenomenon of multiple flows executing concurrently
          10. multitasking / Time slicing
          11. The notion of a process taking turns with other processes
          12. time slice
          13. Each time period that a process executes a portion of its flow
          14. parallel flows
          15. If two flows are running concurrently on different processor cores or computers
          16. Private address space
          17. Private
          18. Can not be accessed by other process
          19. 0x08048000
          20. 0x00400000
        3. User and Kernel Modes
          1. Processors typically provide this capability with a mode bit in some control register that characterizes the privileges that the process currently enjoys.
          2. /proc
          3. exports the contents of many kernel data structures as a hierarchy of text files
          4. /sys
          5. exports additional low-level information about system buses and devices
        4. Process scheduling
          1. At certain points during the execution of a process, the kernel can decide to preempt the current process and restart a previously preempted process
        5. Context Switches
          1. (1) saves the context of the current process
          2. (2) restores the saved context of some previously preempted process
          3. (3) passes control to this newly restored process
        6. Process Control
          1. Obtaining Process IDs
          2. Creating and Terminating Processes
          3. process states
          4. Running.
          5. Stopped
          6. Terminated
          7. fork
          8. Call once, return twice
          9. Concurrent execution
          10. Duplicate but separate address spaces
          11. Shared files
          12. process hierarchy
          13. Reaping Child Processes
          14. zombie
          15. waitpid
          16. Determining the Members of the Wait Set
          17. pid > 0
          18. pid = -1
          19. Modifying the Default Behavior
          20. 0
          21. WNOHANG
          22. WUNTRACED
          23. Checking the Exit Status of a Reaped Child
          24. WIFEXITED(status)
          25. WEXITSTATUS(status)
          26. WIFSIGNALED(status)
          27. WTERMSIG(status)
          28. WIFSTOPPED(status)
          29. WSTOPSIG(status)
          30. Error Conditions
          31. -1
          32. ECHILD
          33. EINTR
          34. Putting Processes to Sleep
          35. sleep
          36. 0
          37. >0
          38. pause
          39. Loading and Running Programs
          40. execve
          41. called once and never returns or -1
          42. Typical organization of the user stack when a new program starts.png
          43. getenv
          44. setenv
          45. unsetenv
          46. Using fork and execve to Run Programs
          47. shell
          48. The read step reads a command line from the user
          49. The evaluate step parses the command line and runs programs on behalf of the user.
          50. Web servers
        7. Process Groups
          1. Every process belongs to exactly one process group
          2. getpgrp
          3. setpgid
      4. Signals
        1. Linux signals
          1. A signal is a message that notifies a process that an event of some type has occurred in the system.
          2. Default action
          3. Dumping core
          4. Years ago, main memory was implemented with a technology known as core memory.
          5. a historical term that means writing an image of the code and data memory segments to disk.
          6. SIGTRAP
          7. SIGABRT
          8. SIGFPE
          9. SIGSEGV
          10. ignore
          11. SIGCHLD
          12. SIGCONT
          13. SIGURG
          14. SIGWINCH
          15. stop until next SIGCONT
          16. SIGSTOP
          17. SIGTSTP
          18. SIGTTIN
          19. SIGTTOU
          20. neither be caught nor ignored
          21. terminate*
          22. SIGKILL
          23. stop until next SIGCONT*
          24. SIGSTOP
          25. Each signal type corresponds to some kind of system event. Low-level hardware exceptions are processed by the kernel’s exception handlers and would not normally be visible to user processes. Signals provide a mechanism for exposing the occurrence of such exceptions to user processes.
        2. Signal Terminology
          1. Sending a signal
          2. The kernel sends (delivers) a signal to a destination process by updating some state in the context of the destination process.
          3. delivered reasons
          4. (1) the kernel has detected a system event
          5. a divide-by-zero error
          6. the termination of a child process
          7. (2) A process has invoked the kill function
          8. A process can send a signal to itself.
          9. How to send
          10. the kill Program
          11. the Keyboard
          12. the kill Function
          13. the alarm Function
          14. Receiving a signal
          15. A destination process receives a signal when it is forced by the kernel to react in some way to the delivery of the signal.
          16. pending signal
          17. A signal that has been sent but not yet received
        3. Signal Handling Issues
          1. Pending signals can be blocked
          2. Unix signal handlers typically block pending signals of the type currently being processed by the handler
          3. Pending signals are not queued
          4. The crucial lesson is that signals cannot be used to count the occurrence of events in other processes.
          5. System calls can be interrupted
          6. Portable Signal Handling
          7. sigaction
          8. Only signals of the type currently being processed by the handler are blocked
          9. As with all signal implementations, signals are not queued
          10. Once the signal handler is installed, it remains installed
          11. Interrupted system calls are automatically restarted whenever possible
        4. Race
          1. A good method to expose your race in code
          2. Insert sleep calls to perturb the scheduling
      5. Nonlocal Jumps
        1. An important application of nonlocal jumps is to permit an immediate return from a deeply nested function call, usually as a result of detecting some error condition
        2. Another important application of nonlocal jumps is to branch out of a signal handler to a specific code location, rather than returning to the instruction that was interrupted by the arrival of the signal.
        3. setjmp
          1. called once but returns multiple times
        4. longjmp
          1. called once but never returns
    3. Virtual Memory http://en.wikipedia.org/wiki/Virtual_memory
      1. 【Better Explained】
        1. three important capabilities
          1. It uses main memory efficiently by treating it as a cache for an address space stored on disk, keeping only the active areas in main memory, and transferring data back and forth between disk and memory as needed
          2. It simplifies memory management by providing each process with a uniform address space
          3. It protects the address space of each process from corruption by other processes
        2. Benefits of understanding Virtual Memory
          1. Virtual memory is central
          2. Virtual memory is powerful
          3. Virtual memory is dangerous
        3. Address Spaces
          1. LAS
          2. VAS
          3. PAS
        4. Core i7/Linux Memory System
          1. Core i7 Address Translation
          2. Linux Virtual Memory System
          3. Linux Virtual Memory Areas (also called segments)
          4. An area is a contiguous chunk of existing (allocated) virtual memory whose pages are related in some way
          5. task_struct
          6. mm_struct
          7. pgd
          8. vm_area_structs
          9. vm_start
          10. vm_end
          11. vm_prot
          12. vm_flags
          13. vm_next
          14. Linux Page Fault Exception Handling
          15. Is virtual address legal?
          16. segmentation fault: accessing a non-existing page
          17. Is the attempted memory access legal?
          18. protection exception
          19. normal page fault
          20. Memory Mapping
          21. 【Better explained】
          22. initializes the contents of a virtual memory area by associating it with an object on disk
          23. swap file/space/area, maintained by the kernel
          24. An object mapped into physical memory; the physical pages are not necessarily contiguous
          25. A virtual memory area that a shared object is mapped into is often called a shared area. Similarly for a private area
          26. Mapping object
          27. Regular file in the Unix filesystem
          28. Anonymous file
          29. created by the kernel, that contains all binary zeros
          30. no data is actually transferred between disk and memory
          31. demand-zero pages
          32. Shared Objects
          33. shared object
          34. It is visible to any other processes that have also mapped the shared object into their virtual memory
          35. The changes are also reflected in the original object on disk
          36. private object
          37. not visible to other processes
          38. not reflected back to the object on disk
          39. private copy-on-write
          40. The fork Function
          41. The execve Function
          42. Delete existing user areas
          43. Map private areas
          44. Map shared areas
          45. Set PC
          46. User-level Memory Mapping
          47. Dynamic Memory Allocation
          48. brk ptr
          49. Explicit allocators
          50. malloc calloc realloc free
          51. sbrk
          52. Why Dynamic Memory Allocation?
          53. Often we do not know the sizes of certain data structures until the program actually runs
          54. Allocator Requirements and Goals
          55. Requirements
          56. Handling arbitrary request sequences
          57. Making immediate responses to requests
          58. Using only the heap
          59. Aligning blocks (alignment requirement)
          60. The allocator must align blocks in such a way that they can hold any type of data object. On most systems, this means that the block returned by the allocator is aligned on an eight-byte (double-word) boundary.
          61. Not modifying allocated blocks
          62. Goals
          63. 【Better Explained】
          64. aggregate payload
          65. peak utilization
          66. Maximizing throughput
          67. throughput is defined as the number of requests that it completes per unit time
          68. Maximizing memory utilization
          69. In fact, the total amount of virtual memory allocated by all of the processes in a system is limited by the amount of swap space on disk
          70. Fragmentation
          71. internal fragmentation
          72. external fragmentation
          73. Implementation
          74. Free block organization
          75. Implicit Free Lists
          76. block
          77. header
          78. payload
          79. padding
          80. Explicit Free Lists
          81. Simple Segregated Storage
          82. Placement
          83. first fit
          84. next fit
          85. best fit
          86. Segregated Fits
          87. Buddy Systems
          88. Splitting
          89. Coalescing
          90. false fragmentation
          91. immediate coalescing
          92. deferred coalescing
          93. Coalescing with Boundary Tags
          94. footer
          95. Getting Additional Heap Memory
          96. sbrk
          97. Implicit allocators /Garbage collectors
          98. Garbage Collector Basics
          99. directed reachability graph
          100. a set of root nodes
          101. a set of heap nodes
          102. conservative garbage collectors
          103. Mark&Sweep Garbage Collectors
          104. Conservative Mark&Sweep for C Programs
          105. Common Memory-Related Bugs in C
          106. Dereferencing Bad Pointers
          107. Reading Uninitialized Memory
          108. Allowing Stack Buffer Overflows
          109. Assuming that Pointers and the Objects they Point to Are the Same Size
          110. Making Off-by-One Errors
          111. Referencing a Pointer Instead of the Object it Points To
          112. Misunderstanding Pointer Arithmetic
          113. Referencing Nonexistent Variables
          114. Referencing Data in Free Heap Blocks
          115. Introducing Memory Leaks
      2. Physical and Virtual Addressing
        1. Memory management unit (MMU)
        2. Page table base register (PTBR)
        3. Address Translation
          1. Address translation with a page table.png
          2. Page hit.png
          3. Page fault.png
          4. Integrating Caches and VM.png
          5. Translation lookaside buffer (TLB)
          6. Components of a virtual address that are used to access the TLB.png
          7. Multi-level Page Tables
          8. This scheme reduces memory requirements
          9. If a PTE in the level-1 table is null that represents a significant potential savings
          10. only the level-1 table and the most heavily used level-2 page tables need to be cached in main memory
        4. Virtual address (VA)
          1. Virtual page offset (VPO)
          2. Virtual page number (VPN)
        5. Physical address (PA)
          1. Physical page number (PPN)
          2. Physical page offset (PPO)
      3. The Role of VM
        1. VM as a Tool for Caching
          1. Page
          2. virtual page (VP)
          3. Unallocated
          4. Cached
          5. Uncached
          6. physical page (PP)page frames
          7. DRAM SRAM
          8. Page Tables
          9. page table entry (PTE)
          10. Page Hits
          11. Page Faults
          12. swapping or paging
          13. swapped in (paged in)
          14. swapped out (paged out)
          15. demand paging
          16. Allocating Pages
          17. Locality
          18. working set or resident set
          19. thrashing
        2. VM as a Tool for Memory Management
          1. demand paging and separate virtual address spaces
          2. Simplifying sharing
          3. Simplifying loading
          4. Simplifying linking
          5. Simplifying memory allocation
        3. VM as a Tool for Memory Protection
          1. Permission bits
          2. SUP
          3. READ
          4. WRITE
          5. segmentation fault
          6. more bits for other process access
  4. Part III Interaction and Communication Between Programs
    1. System-Level I/O
      1. 【Better Explained】
        1. Benefits of understanding Unix I/O
          1. understand other systems concepts
          2. Sometimes you have no choice but to use Unix I/O
          3. there are problems with the standard I/O library that make it risky to use for network programming.
        2. A Unix file is a sequence of bytes
        3. All I/O devices, such as networks, disks, and terminals, are modeled as files
        4. This elegant mapping allows the kernel to export a simple, low-level API that enables all input and output to be performed in a uniform and consistent way
        5. RIO (Robust I/O)
          1. W. Richard Stevens
      2. Unix I/O
        1. Opening and Closing Files
          1. descriptor
          2. Closing files
          3. free the data structures
          4. restores the descriptor to the pool of available descriptors
        2. Reading and Writing Files
          1. file position
          2. EOF
          3. There is no explicit “EOF character” at the end of a file
          4. read
          5. 0 EOF
        3. Reading File Metadata
          1. stat
        4. Sharing Files
          1. Descriptor table
          2. File table
          3. v-node table
        5. I/O Redirection
          1. dup2
        6. Standard I/O
          1. A stream is a pointer to a structure of type FILE
          2. Full duplex
          3. Restriction 1: Input functions following output functions
          4. Restriction 2: Output functions following input functions
    2. Network Programming
      1. The Client-Server Programming Model
      2. Networks
        1. WAN
          1. Router
          2. LAN
          3. Bridged Ethernet
          4. Some twisted pairs of wires
          5. Multiple Ethernet segments
          6. Ethernet segment
          7. Some twisted pairs of wires
          8. Hub
          9. Bridge
        2. Internet Protocol
          1. Naming scheme
          2. 48-bit address
          3. internet addresses
          4. Delivery mechanism
          5. Frame
          6. Header
          7. Payload
          8. Encapsulation
      3. The Global IP Internet
        1. IP Addresses
          1. scalar IP address in a structure
          2. dotted-decimal representation
          3. network byte order
          4. inet_aton
          5. inet_ntoa
        2. Internet Domain Names
          1. ICANN
          2. Internet domain name hierarchy
          3. HOSTS.txt
          4. DNS
          5. gethostbyname
          6. gethostbyaddr
        3. Internet Connections
          1. Socket Address Structures
          2. sockaddr
          3. SA
          4. Stevens
          5. sockaddr_in
          6. in_addr
          7. The Sockets Interface
          8. socket
          9. bind
      4. Web Servers
        1. Web Basics
          1. HTTP
          2. HTML
          3. Hyperlinks
          4. WWW
          5. Tim Berners-Lee
          6. MIME
          7. URL
          8. ?
          9. &
        2. Web Content
          1. Fetch a disk file and return its contents to the client
          2. static content
          3. serving static content
          4. Run an executable file and return its output to the client
          5. dynamic content
          6. serving dynamic content
          7. /
        3. HTTP Transactions
    3. Concurrent Programming
      1. 【Better Explained】
        1. Application-level concurrency
          1. Accessing slow I/O devices
          2. Interacting with humans
          3. Reducing latency by deferring work
          4. Servicing multiple network clients
          5. Computing in parallel on multi-core machines
        2. concurrent programs
          1. Processes
          2. each logical control flow is a process that is scheduled and maintained by the kernel
          3. Since processes have separate virtual address spaces, flows must communicate with each other using explicit IPC
          4. I/O multiplexing
          5. applications explicitly schedule their own logical flows in the context of a single process
          6. Since the program is a single process, all flows share the same address space
          7. Logical flows are modeled as state machines that the main program explicitly transitions from state to state as a result of data arriving on file descriptors.
          8. Threads
          9. Threads are logical flows that run in the context of a single process and are scheduled by the kernel.
          10. scheduled by the kernel like process flows
          11. sharing the same virtual address space like I/O multiplexing flows
      2. Concurrent Programming with Processes
        1. Pros and Cons of Processes
          1. file tables are shared
          2. separate address spaces for processes
          3. Avoids one process accidentally overwriting the virtual memory of another
          4. make it more difficult for processes to share state information
          5. Slower because the overhead for process control and IPC is high.
        2. Traditional view of a process
          1. Process context
          2. Data registers
          3. Condition codes
          4. Stack pointer (SP)
          5. Program counter (PC)
          6. Kernel context
          7. Process ID (PID)
          8. VM structures
          9. Open files
          10. Signal handlers
          11. brk pointer
          12. Code, data, and stack
          13. stack
          14. shared libraries
          15. run-time heap
          16. read/write data
          17. read-only code/data
      3. Concurrent Programming with I/O Multiplexing
        1. Pros and Cons of I/O Multiplexing
          1. event-driven designs give programmers more control over the behavior of their programs than process-based designs
          2. I/O multiplexing runs in the context of a single process
          3. do not require a process context switch to schedule a new flow
          4. coding complexity
      4. Concurrent Programming with Threads
        1. 【Better Explained】
          1. main thread
          2. peer thread
          3. pool of peers
          4. The main impact of this notion of a pool of peers is that a thread can kill any of its peers, or wait for any of its peers to terminate.
          5. thread routine
          6. Posix Threads
          7. A joinable thread like a zombie process
          8. A joinable thread can be reaped and killed by other threads. Its memory resources (such as the stack) are not freed until it is reaped by another thread
          9. A detached thread cannot be reaped or killed by other threads. Its memory resources are freed automatically by the system when it terminates.
          10. Mapping Variables to Memory
          11. Global variables
          12. Local automatic variables
          13. Local static variables
        2. Threads Memory Model
          1. Thread 1 context:
          2. Data registers
          3. general-purpose registers
          4. Condition codes
          5. stack 1
          6. In my view, this stack is the thread routine's call stack; in other words, it is just an ordinary function call stack.
          7. ????
          8. SP1
          9. PC1
          10. TID1
          11. Thread 2 context:
          12. Data registers
          13. general-purpose registers
          14. Condition codes
          15. stack 2
          16. SP2
          17. PC2
          18. TID2
          19. Shared code and data
          20. shared libraries
          21. run-time heap
          22. read/write data
          23. read-only code/data
          24. Kernel context:
          25. VM structures
          26. Open files
          27. Signal handlers
          28. brk pointer
          29. PID
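The memory model above can be demonstrated in code (a sketch; `shared_slots`, `writer`, and `run_writers` are illustrative names): a global variable lives in the shared read/write data area, so writes by any peer are visible to all threads after they complete, while each thread's automatic variables live on its own stack:

```c
#include <pthread.h>

/* Shared: one instance in the read/write data area, visible to all threads. */
static int shared_slots[2];

static void *writer(void *vargp) {
    long id = (long)vargp;
    int local = (int)id * 10;     /* private: lives on this thread's own stack */
    shared_slots[id] = local + 1; /* write to shared data */
    return NULL;
}

int run_writers(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, writer, (void *)0L);
    pthread_create(&t1, NULL, writer, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    /* both peers' writes are visible to the main thread after the joins */
    return shared_slots[0] + shared_slots[1];
}
```

Each `local` exists once per thread; `shared_slots` exists exactly once, which is why unsynchronized updates to shared variables are dangerous (next subsection).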
        3. Synchronizing Threads with Semaphores
          1. Sequential Consistency
          2. Synchronization error
          3. Progress Graphs
          4. critical section
          5. safe trajectory
          6. unsafe trajectory
          7. Semaphores
          8. Edsger Dijkstra
          9. Proberen
          10. Verhogen
          11. Semaphore invariant
          12. Binary semaphore
          13. Mutex
          14. Counting semaphore
          15. Schedule Shared Resources
          16. Forbidden region
          17. Producer-consumer model
          18. Using Threads for Parallelism
          19. Strong scaling
          20. Weak scaling
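Dijkstra's P (`Proberen`) and V (`Verhogen`) operations above correspond to POSIX `sem_wait` and `sem_post`. A minimal sketch of a binary semaphore (mutex) protecting a shared counter — the critical section `cnt++` — so every trajectory through the progress graph stays out of the unsafe region (names like `run_counters` are mine):

```c
#include <pthread.h>
#include <semaphore.h>

#define NITERS 10000

static volatile long cnt = 0;     /* shared counter */
static sem_t mutex;               /* binary semaphore protecting cnt */

static void *count_thread(void *vargp) {
    for (int i = 0; i < NITERS; i++) {
        sem_wait(&mutex);         /* P: enter the critical section */
        cnt++;
        sem_post(&mutex);         /* V: leave the critical section */
    }
    return NULL;
}

long run_counters(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);       /* initial value 1 => binary semaphore */
    pthread_create(&t1, NULL, count_thread, NULL);
    pthread_create(&t2, NULL, count_thread, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return cnt;                   /* with the semaphore, always 2 * NITERS */
}
```

Without the P/V pair, interleaved load/update/store instructions could lose increments; the semaphore invariant (value never below 0) makes the bad interleavings a forbidden region.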
        4. Thread-unsafe functions
          1. Failing to protect shared variables
          2. Semaphore
          3. it does not require any changes in the calling program
          4. the additional synchronization will slow down the program
          5. Relying on state across multiple function invocations
          6. The only way to make a function such as rand thread-safe is to rewrite it
          7. Returning a pointer to a static variable
          8. Rewrite it.
          9. Lock-and-copy
          10. the additional synchronization will slow down the program
          11. For example, gethostbyname returns a pointer to a complex structure; to copy the entire hierarchy we need a deep copy.
          12. Lock-and-copy does not work for the class of functions that rely on state across multiple invocations.
          13. Calling thread-unsafe functions
          14. Functions that rely on state across multiple invocations remain unsafe
          15. Functions that involve shared variables or return pointers to static variables can be made safe with semaphores or lock-and-copy
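The lock-and-copy fix for the "returning a pointer to a static variable" class can be sketched with `ctime`, which returns a pointer to a shared static buffer (the wrapper name `ctime_ts` follows the book's `_ts` suffix convention; the buffer size is my assumption):

```c
#include <pthread.h>
#include <string.h>
#include <time.h>

static pthread_mutex_t ctime_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Lock-and-copy wrapper: hold a lock while calling the thread-unsafe
   function, copy its static-buffer result into caller-owned storage,
   then release the lock. buf must hold at least 26 bytes for ctime. */
char *ctime_ts(const time_t *timep, char *buf) {
    pthread_mutex_lock(&ctime_mutex);
    strcpy(buf, ctime(timep));    /* copy out of the shared static buffer */
    pthread_mutex_unlock(&ctime_mutex);
    return buf;
}
```

Each caller passes its own `buf`, so no two threads ever read the static buffer without the lock held — at the cost of the extra synchronization the notes mention.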
        5. Reentrancy
          1. Reentrant function
          2. Reentrant functions are characterized by the property that they do not reference any shared data when they are called by multiple threads
          3. Reentrant functions are typically more efficient than non-reentrant thread-safe functions because they require no synchronization operations
          4. explicitly reentrant
          5. all function arguments are passed by value (i.e., no pointers)
          6. all data references are to local automatic stack variables (i.e., no references to static or global variables)
          7. Implicitly reentrant
          8. Function arguments can be passed by reference (that is, we allow them to pass pointers)
          9. the calling threads must be careful to pass pointers to non-shared data
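An implicitly reentrant rewrite of `rand` illustrates the idea (a sketch in the style of `rand_r`; `my_rand_r` and the constants are illustrative): all state is reached through the caller-supplied pointer, so there is no shared static seed and no synchronization is needed:

```c
/* Implicitly reentrant pseudo-random generator: the seed lives wherever
   the caller keeps it (ideally a local automatic variable), reached only
   through the pointer argument. It stays reentrant only if callers pass
   pointers to non-shared data. */
unsigned my_rand_r(unsigned *nextp) {
    *nextp = *nextp * 1103515245u + 12345u;   /* linear congruential step */
    return (*nextp / 65536u) % 32768u;        /* value in [0, 32767] */
}
```

Two threads each using their own seed variable get independent, deterministic sequences, which is exactly what the static-seed version of `rand` cannot provide.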
        6. Library Functions in Threaded Programs
          1. Rewrite
          2. Lock-and-copy (with deep copy)
        7. Race
          1. A race occurs when the correctness of a program depends on one thread reaching point x in its control flow before another thread reaches point y
          2. Golden rule
          3. Threaded programs must work correctly for any feasible trajectory
        8. Deadlock
          1. a collection of threads are blocked, waiting for a condition that will never be true
          2. Mutex lock ordering rule: deadlock is avoided if each thread acquires its mutexes in a consistent global order and releases them in the opposite order
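The lock ordering rule can be sketched as follows (names like `transfer` and `run_transfers` are mine): both threads need both mutexes, but because every thread acquires `lock_a` strictly before `lock_b`, no cycle of waiting threads can form:

```c
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static long balance_a = 100, balance_b = 100;

/* Mutex lock ordering rule: EVERY caller takes lock_a first, then lock_b,
   regardless of transfer direction. If one thread took them in the
   opposite order, the two threads could each hold one lock and block
   forever waiting for the other. */
static void transfer(long *from, long *to, long amount) {
    pthread_mutex_lock(&lock_a);      /* always first  */
    pthread_mutex_lock(&lock_b);      /* always second */
    *from -= amount;
    *to   += amount;
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}

static void *t1_routine(void *v) { (void)v; transfer(&balance_a, &balance_b, 10); return NULL; }
static void *t2_routine(void *v) { (void)v; transfer(&balance_b, &balance_a, 5);  return NULL; }

long run_transfers(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, t1_routine, NULL);
    pthread_create(&t2, NULL, t2_routine, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return balance_a + balance_b;     /* total money is conserved */
}
```

In progress-graph terms, the consistent order guarantees the trajectory can never enter a state where both threads are inside each other's forbidden regions.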