Cyberspace Mimic Defense: Generalized Robust Control and Endogenous Security
By: Jiangxing Wu
Springer-Verlag, 2019
ISBN: 9783030298449, 770 pages
Format: PDF, online reading
Copy protection: watermark
Price: 213.99 EUR
Preface
Author’s Profile
Brief Introduction (Abstract)
Preface
Acknowledgments
Contents
Abbreviations

Part I
Chapter 1: Security Risks from Vulnerabilities and Backdoors
  1.1 Harmfulness of Vulnerabilities and Backdoors
    1.1.1 Related Concepts
    1.1.2 Basic Topics of Research
      1.1.2.1 Accurate Definition of Vulnerability
      1.1.2.2 Reasonable Classification of Vulnerabilities
      1.1.2.3 Unpredictability of Vulnerabilities
      1.1.2.4 Elimination of Vulnerabilities
    1.1.3 Threats and Impacts
      1.1.3.1 Broad Security Threats
  1.2 Inevitability of Vulnerabilities and Backdoors
    1.2.1 Unavoidable Vulnerabilities and Backdoors
      1.2.1.1 The Contradiction Between Complexity and Verifiability
      1.2.1.2 Challenges in Supply Chain Management
      1.2.1.3 Inadequacy of Current Theories and Engineering Techniques
    1.2.2 Contingency of Vulnerability Emergence
      1.2.2.1 Contingent Time of Discovery
      1.2.2.2 Contingent Form of Emergence
    1.2.3 The Temporal and Spatial Characteristic of Cognition
      1.2.3.1 From Quantitative Changes to Qualitative Changes
      1.2.3.2 Absolute and Relative Interdependence and Conversion
      1.2.3.3 The Unity of Specificity and Generality
  1.3 The Challenge of Defense Against Vulnerabilities and Backdoors
    1.3.1 Major Channels for Advanced Persistent Threat (APT) Attacks
    1.3.2 Uncertain Unknown Threats
    1.3.3 Limited Effect of Traditional “Containment and Repair”
      1.3.3.1 Reduce the Introduction of Vulnerabilities into Software Development, but Oversights Are Inevitable
      1.3.3.2 Discovering Vulnerabilities in the Testing Phase, but New Ones Are Emerging
      1.3.3.3 Exploit Mitigation Measures Keep Improving, but the Confrontation Never Stops
      1.3.3.4 The Careful Designing of White List Detection Mechanisms for System Protection Fails to Prevent Bypassing from Taking Place from Time to Time
  1.4 Inspirations and Reflection
    1.4.1 Building a System Based on “Contamination”
    1.4.2 From Component Credibility to Structure Security
    1.4.3 From Reducing Exploitability to Destroying Accessibility
    1.4.4 Transforming the Problematic Scenarios
  References
Chapter 2: Formal Description of Cyber Attacks
  2.1 Formal Description Methods of Conventional Cyber Attacks
    2.1.1 Attack Tree
    2.1.2 Attack Graph
    2.1.3 Analysis of Several Attack Models
  2.2 The AS Theory
    2.2.1 The AS Model
    2.2.2 Defects in the AS Theory
  2.3 The MAS
    2.3.1 Definition and Nature of the MAS
    2.3.2 MAS Implementation Methods
    2.3.3 Limitations of the MAS
  2.4 New Methods of Formal Description of Cyber Attacks
    2.4.1 Cyber Attack Process
    2.4.2 Formal Description of the Attack Graph
    2.4.3 Formal Description of an Attack Chain
    2.4.4 Vulnerability Analysis of Cyber Attack Chains
      2.4.4.1 Conditions for the Successful Implementation of Atomic Attacks
      2.4.4.2 The Conditions on Which the Successfully Completed Attack Chain Depends
  References
Chapter 3: Conventional Defense Technologies
  3.1 Static Defense Technology
    3.1.1 Overview of Static Defense Technology
    3.1.2 Analysis of Static Defense Technology
      3.1.2.1 Firewall Technology
      3.1.2.2 Intrusion Detection Technology
      3.1.2.3 Intrusion Prevention Technology
      3.1.2.4 Vulnerability Scanning Technology
  3.2 Honeypot
    3.2.1 Network Intrusion and Malicious Code Detection
    3.2.2 Capturing Samples of Malicious Codes
    3.2.3 Tracking and Analysis of Security Threats
    3.2.4 Extraction of Attack Features
    3.2.5 Limitations of Honeypot
  3.3 Collaborative Defense
    3.3.1 Collaborative Defense Between Intrusion Detection and Firewall
    3.3.2 Collaborative Defense Between Intrusion Prevention and Firewall Systems
    3.3.3 Collaborative Defense Between the Intrusion Prevention System and Intrusion Detection System
    3.3.4 Collaborative Defense Between Intrusion Prevention and Vulnerability Scanning Systems
    3.3.5 Collaborative Defense Between the Intrusion Prevention System and Honeypot
  3.4 Intrusion Tolerance Technology
    3.4.1 Technical Principles of Intrusion Tolerance
      3.4.1.1 Theoretical Model
      3.4.1.2 Mechanisms and Strategies
    3.4.2 Two Typical Intrusion Tolerance Systems
      3.4.2.1 Scalable Intrusion-Tolerant Architecture
      3.4.2.2 Malicious and Accidental Fault Tolerance for Internet Applications [75]
    3.4.3 Comparison of Web Intrusion Tolerance Architectures (Table 3.1)
    3.4.4 Differences Between Intrusion Tolerance and Fault Tolerance
  3.5 Sandbox Acting as an Isolation Defense
    3.5.1 Overview of Sandbox
    3.5.2 Theoretical Principles of Sandbox
      3.5.2.1 Application Layer Sandbox
      3.5.2.2 Kernel Layer Sandbox
      3.5.2.3 Hybrid Sandbox
    3.5.3 Status Quo of Sandbox Defense Technology
  3.6 Computer Immune Technology
    3.6.1 Overview of Immune Technology
    3.6.2 Artificial Immune System Status
  3.7 Review of Conventional Defense Methods
  References
Chapter 4: New Approaches to Cyber Defense
  4.1 New Developments in Cyber Defense Technologies
  4.2 Trusted Computing
    4.2.1 Basic Thinking Behind Trusted Computing
    4.2.2 Technological Approaches of Trusted Computing
      4.2.2.1 Root of Trust
      4.2.2.2 Trust Measurement Model and Chain of Trust
      4.2.2.3 Trusted Computing Platform (TCP)
    4.2.3 New Developments in Trusted Computing
      4.2.3.1 Trusted Computing 3.0
      4.2.3.2 Trusted Cloud
      4.2.3.3 SGX Architecture
  4.3 Tailored Trustworthy Spaces
    4.3.1 Preconditions
      4.3.1.1 Communication
      4.3.1.2 Computing
      4.3.1.3 Security
      4.3.1.4 Summary
    4.3.2 Tailored Trustworthy Spaces (TTS)
      4.3.2.1 Features Research
      4.3.2.2 Trust Negotiation
      4.3.2.3 Set of Operations
      4.3.2.4 Privacy
  4.4 Moving Target Defense
    4.4.1 MTD Mechanism
      4.4.1.1 Randomization
      4.4.1.2 Diversification Mechanism
      4.4.1.3 Dynamic Mechanism
      4.4.1.4 Symbiotic Mechanism
    4.4.2 Roadmap and Challenges of MTD
  4.5 Blockchain
    4.5.1 Basic Concept
    4.5.2 Core Technologies
    4.5.3 Analysis of Blockchain Security
  4.6 Zero Trust Security Model
    4.6.1 Basic Concept
    4.6.2 Forrester’s Zero Trust Security Framework
    4.6.3 Google’s Solution
      4.6.3.1 Principles of Identifying Security Devices
      4.6.3.2 Principles for Identifying Users’ Security
      4.6.3.3 Removing Trust from the Network
      4.6.3.4 Externalizing Applications and Workflows
      4.6.3.5 Implementing Inventory-Based Access Control
  4.7 Reflections on New Cyber Defense Technologies
  References
Chapter 5: Analysis on Diversity, Randomness, and Dynamicity
  5.1 Diversity
    5.1.1 Overview
    5.1.2 Diversity of the Executors
      5.1.2.1 Executor Diversity in Network Operating Systems
      5.1.2.2 Executor Diversity in the Path
    5.1.3 Diversity of the Execution Space
      5.1.3.1 Execution Space Diversity in Network Operating Systems
      5.1.3.2 Execution Space Diversity in the Path
    5.1.4 Differences Between Diversity and Pluralism
  5.2 Randomness
    5.2.1 Overview
    5.2.2 Address Space Randomization
    5.2.3 Instruction System Randomization
    5.2.4 Kernel Data Randomization
    5.2.5 Cost of Introduction
      5.2.5.1 Different Software and Hardware Versions Require Different Expert Teams to Design and Maintain
      5.2.5.2 The Cost Will Inevitably Increase if a Multi-version Service System Is Constructed
      5.2.5.3 Introduction of Diversity Makes Multi-version Synchronized Updating a New Challenge
  5.3 Dynamicity
    5.3.1 Overview
      5.3.1.1 Resource Redundancy Configuration
      5.3.1.2 Cost of Randomness
      5.3.1.3 Cost of Effectiveness
    5.3.2 Dynamic Defense Technology
      5.3.2.1 Dynamic Network
      5.3.2.2 Dynamic Platform
      5.3.2.3 Dynamic Software
      5.3.2.4 Dynamic Data
    5.3.3 Dynamicity Challenges
  5.4 Case of OS Diversity Analysis
    5.4.1 Statistical Analysis Data Based on the NVD
    5.4.2 Common OS Vulnerabilities
    5.4.3 Conclusions
  5.5 Chapter Summary
  References
Chapter 6: Revelation of the Heterogeneous Redundancy Architecture
  6.1 Introduction
  6.2 Addressing the Challenge of Uncertain Failures
    6.2.1 Proposal of the Problem
    6.2.2 Enlightenment from TRA
    6.2.3 Formal Description of TRA
  6.3 The Role of Redundancy and Heterogeneous Redundancy
    6.3.1 Redundancy and Fault Tolerance
    6.3.2 Endogenous Functions and Structural Effects
    6.3.3 Redundancy and Situational Awareness
    6.3.4 From Isomorphism to Heterogeneity
      6.3.4.1 Isomorphic Redundancy
      6.3.4.2 Heterogeneous Redundancy
      6.3.4.3 Appropriate Functional Intersections
    6.3.5 Relationship Between Fault Tolerance and Intrusion Tolerance
  6.4 Voting and Ruling
    6.4.1 Majority Voting and Consensus Mechanism
    6.4.2 Multimode Ruling
  6.5 Dissimilar Redundancy Structure
    6.5.1 Analysis of the Intrusion Tolerance Properties of the DRS
    6.5.2 Summary of the Endogenous Security Effects of the DRS
    6.5.3 Hierarchical Effect of Heterogeneous Redundancy
    6.5.4 Systematic Fingerprint and Tunnel-Through
    6.5.5 Robust Control and General Uncertain Disturbances
  6.6 Anti-attack Modeling
    6.6.1 The GSPN Model
    6.6.2 Anti-attack Considerations
    6.6.3 Anti-attack Modeling
  6.7 Anti-aggression Analysis
    6.7.1 Anti-general Attack Analysis
      6.7.1.1 Non-redundant System
      6.7.1.2 Dissimilar Redundant System
    6.7.2 Anti-special Attack Analysis
      6.7.2.1 Non-redundant System
      6.7.2.2 Dissimilar Redundant System
    6.7.3 Summary of the Anti-attack Analysis
  6.8 Conclusion
    6.8.1 Conditional Awareness of Uncertain Threats
    6.8.2 New Connotations of General Robust Control
    6.8.3 DRS Intrusion Tolerance Defect
    6.8.4 DRS Transformation Proposals
  References
Chapter 7: DHR Architecture
  7.1 Dynamic Heterogeneous Redundant Architecture
    7.1.1 Basic Principles of DHRA
      7.1.1.1 Assumed Conditions
      7.1.1.2 Composition and Functions
      7.1.1.3 Core Mechanism
      7.1.1.4 Robust Control and Problem Avoidance
      7.1.1.5 Iterative Convergence
    7.1.2 Goals and Effects of DHR
      7.1.2.1 Killing Four Birds with One Stone
      7.1.2.2 Dynamic Variability of the Apparent Structure
      7.1.2.3 Equivalent to TRA with the Superposed-State Authentication Function
      7.1.2.4 Metastable Scenarios and DRS Isomorphism
      7.1.2.5 The Uncertainty Attribute
      7.1.2.6 Coding Theory and Security Measurement
      7.1.2.7 Endogenous Security Mechanism and Integrated Defense
      7.1.2.8 Problem Avoidance and Problem Zeroing
    7.1.3 Typical DHR Architecture
    7.1.4 Atypical DHR Architecture
  7.2 The Attack Surface of DHR
  7.3 Functionality and Effectiveness
    7.3.1 Creating a Cognition Dilemma for the Target Object
    7.3.2 DFI to Present Uncertainty
    7.3.3 Making It Difficult to Exploit the Loopholes of the Target Object
    7.3.4 Increasing the Uncertainty for an Attack Chain
    7.3.5 Increasing the Difficulty for MR Escape
    7.3.6 Independent Security Gain
    7.3.7 Strong Correlation Between the Vulnerability Value and the Environment
    7.3.8 Making It Difficult to Create a Multi-target Attack Sequence
    7.3.9 Measurable Generalized Dynamization
    7.3.10 Weakening the Impact of Homologous Backdoors
  7.4 Reflections on the Issues Concerned
    7.4.1 Addressing Uncertain Threats with Endogenous Mechanisms
    7.4.2 Reliability and Credibility Guaranteed by the Structural Gain
    7.4.3 New Security-Trustable Methods and Approaches
    7.4.4 Creating a New Demand in a Diversified Market
    7.4.5 The Problem of Super Escape and Information Leaking
  7.5 Uncertainty: An Influencing Factor
    7.5.1 DHR Endogenous Factors
    7.5.2 DHR-Introduced Factors
    7.5.3 DHR-Combined Factors
    7.5.4 Challenges to a Forced Breakthrough
  7.6 Analogical Analysis Based on the Coding Theory
    7.6.1 Coding Theory and Turbo Codes
    7.6.2 Analogic Analysis Based on Turbo Encoding
      7.6.2.1 Coding Heterogeneity
      7.6.2.2 Coding Redundancy
      7.6.2.3 Coding OV
      7.6.2.4 Decoding and Ruling
      7.6.2.5 Codec Dynamics
    7.6.3 Some Insights
      7.6.3.1 Randomness and Redundancy Serving as the Core Elements for Solving Cyberspace Security Problems
      7.6.3.2 Uncertainty Effect Brought by DHRA
      7.6.3.3 Flexibility and Self-restoring Capability of DHRA
      7.6.3.4 Insufficiency in Analogical Analysis Using the Turbo Code Model
  7.7 DHR-Related Effects
    7.7.1 Ability to Perceive Unidentified Threats
    7.7.2 Distributed Environmental Effect
    7.7.3 Integrated Effect
    7.7.4 Architecture-Determined Safety
    7.7.5 Changing the Attack and Defense Game Rules in Cyberspace
    7.7.6 Creating a Loose Ecological Environment
      7.7.6.1 “Isomeric and Diversified” Ecology
      7.7.6.2 New Ways to Accelerate Product Maturity
      7.7.6.3 Self-controllable Complementary Form
      7.7.6.4 Creating an Integrated Operating Environment
    7.7.7 Restricted Application
      7.7.7.1 Micro-synchronous Low-Time-Delay Operating Environment
      7.7.7.2 Time-Delay-Constrained Scenarios That Cannot Be Corrected
      7.7.7.3 Lack of a Normalizable Input/Output Interface
      7.7.7.4 Lack of Heterogeneous Hardware/Software Resources
      7.7.7.5 “Blackout” in Software Update
      7.7.7.6 Cost-Sensitive Area
      7.7.7.7 Concerns Regarding the Highly Robust Software Architecture
      7.7.7.8 Issue of Ruling
  References
Part II
Chapter 8: Original Meaning and Vision of Mimic Defense
  8.1 Mimic Disguise and Mimic Defense
    8.1.1 Biological Mimicry
    8.1.2 Mimic Disguise
    8.1.3 Two Basic Security Problems and Two Severe Challenges
    8.1.4 An Entry Point: The Vulnerability of an Attack Chain
    8.1.5 Build the Mimic Defense
    8.1.6 Original Meaning of Mimic Defense
  8.2 Mimic Computing and Endogenous Security
    8.2.1 The Plight of HPC Power Consumption
    8.2.2 Original Purpose of Mimic Calculation
    8.2.3 Vision of Mimic Calculation
    8.2.4 Variable Structure Calculation and Endogenous Security
  8.3 Vision of Mimic Defense
    8.3.1 Reversing the Easy-to-Attack and Hard-to-Defend Status
    8.3.2 A Universal Structure and Mechanism
    8.3.3 Separation of Robust Control and Service Functions
    8.3.4 Unknown Threat Perception
    8.3.5 A Diversified Eco-environment
    8.3.6 Achievement of Multi-dimensional Goals
    8.3.7 Reduce the Complexity of Security Maintenance
  References
Chapter 9: The Principle of Cyberspace Mimic Defense
  9.1 Overview
    9.1.1 Core Ideology
    9.1.2 Eradicating the Root Cause for Cyber Security Problems
    9.1.3 Biological Immunity and Endogenous Security
      9.1.3.1 Non-specific Immunity
      9.1.3.2 Specific Immunity
      9.1.3.3 Non-prior-Knowledge-Reliant Defense
      9.1.3.4 Endogenous Security
    9.1.4 Non-specific Surface Defense
    9.1.5 Integrated Defense
    9.1.6 GRC and the Mimic Structure
    9.1.7 Goals and Expectations
      9.1.7.1 Development Goals
      9.1.7.2 Technical Expectations
    9.1.8 Potential Application Targets
  9.2 Cyberspace Mimic Defense
    9.2.1 Underlying Theories and Basic Principles
      9.2.1.1 FE Common Sense and the TRA
      9.2.1.2 DHR Architecture
      9.2.1.3 Security Effects Brought About by Endogenous Mechanisms
    9.2.2 Mimic Defense System
      9.2.2.1 The Main Concepts and Core Mechanisms of CMD
      9.2.2.2 CMD Model
    9.2.3 Basic Features and Core Processes
    9.2.4 Connotation and Extension Technologies
      9.2.4.1 Connotation Technologies
      9.2.4.2 Extension Technologies
    9.2.5 Summary and Induction
    9.2.6 Discussions of the Related Issues
      9.2.6.1 CMD Level Based on the Attack Effect
      9.2.6.2 Measurement Based on Reliability Theories and Test Methods
      9.2.6.3 Security Situation Monitoring of the Target System
      9.2.6.4 Contrast Verification
      9.2.6.5 Information Security Effect
      9.2.6.6 Mimic Defense and Mimic Computation
      9.2.6.7 Unknown Threat Detection Devices
      9.2.6.8 “Halt-Restart” Bumps
      9.2.6.9 Standby Cooperative Attacks and External Command Disturbances
      9.2.6.10 Superimposable and Iterative
      9.2.6.11 Granularity of the Target Object
      9.2.6.12 Natural Scenarios of DHR
      9.2.6.13 About the Side Channel Attack
  9.3 Structural Representation and Mimic Scenarios
    9.3.1 Uncertain Characterization of the Structure
    9.3.2 Mimic Scenario Creation
    9.3.3 Typical Mimic Scenarios
  9.4 Mimic Display
    9.4.1 Typical Modes of Mimic Display
    9.4.2 Considerations of the MB Credibility
  9.5 Anti-attack and Reliability Analysis
    9.5.1 Overview
    9.5.2 Anti-attack and Reliability Models
    9.5.3 Anti-attack Analysis
      9.5.3.1 Analysis of CMD’s Resistance Against the General DM/CM Attacks
      9.5.3.2 Anti-special Attack Analysis of the CMD System
      9.5.3.3 Summary of the Anti-attack Analysis
    9.5.4 Reliability Analysis
    9.5.5 Conclusion
  9.6 Differences Between CMD and HIT (Heterogeneous Intrusion Tolerance)
    9.6.1 Major Differences
    9.6.2 Prerequisites and Functional Differences
    9.6.3 Summary
  References
Chapter 10: Engineering and Implementation of Mimic Defense
  10.1 Basic Conditions and Constraints
    10.1.1 Basic Conditions
    10.1.2 Constraints
  10.2 Main Realization Mechanisms
    10.2.1 Structural Effect and Functional Convergence Mechanism
    10.2.2 One-Way or Unidirectional Connection Mechanism
    10.2.3 Policy and Schedule Mechanism
    10.2.4 Mimic Ruling Mechanism
    10.2.5 Negative Feedback Control Mechanism
    10.2.6 Input Allocation and Adaptation Mechanism
    10.2.7 Output Agency and Normalization Mechanism
    10.2.8 Sharding/Fragmentation Mechanism
    10.2.9 Randomization/Dynamization/Diversity Mechanism
    10.2.10 Virtualization Mechanism
    10.2.11 Iteration and Superposition Mechanism
    10.2.12 Software Fault Tolerance Mechanism
    10.2.13 Dissimilarity Mechanism
    10.2.14 Reconfiguration Mechanism
    10.2.15 Executor’s Cleaning and Recovery Mechanism
    10.2.16 Diversified Compilation Mechanism
    10.2.17 Mimic Structure Programming
  10.3 Major Challenges to Engineering Implementation
    10.3.1 Best Match of Function Intersection
    10.3.2 Complexity of Multimode Ruling
    10.3.3 Service Turbulence
    10.3.4 The Use of Open Elements
    10.3.5 Execution Efficiency of Mimic Software
    10.3.6 Diversification of Application Programs
    10.3.7 Mimic Defense Interface Configuration
      10.3.7.1 Route Forwarding Based on Mimic Defense
      10.3.7.2 Mimic Defense-Based Web Access Server
      10.3.7.3 File Storage System Based on Mimic Defense
      10.3.7.4 Mimic Defense-Based Domain Name Resolution
      10.3.7.5 Mimic Defense-Based Gun Control System
    10.3.8 Version Update
    10.3.9 Loading of Non-cross-Platform Application
    10.3.10 Re-synchronization and Environment Reconstruction
    10.3.11 Simplifying Complexity of Heterogeneous Redundancy Realization
      10.3.11.1 Commercial Obstacles to Heterogeneous Redundancy
      10.3.11.2 Locking the Robustness of the Service
      10.3.11.3 Achieving Layered Heterogeneous Redundancy
      10.3.11.4 SGX and the Protection of Heterogeneous Redundant Code and Data
      10.3.11.5 Avoiding the “Absolutely Trustworthy” Trap of SGX
  10.4 Testing and Evaluation of Mimic Defense
    10.4.1 Analysis of Mimic Defense Effects
      10.4.1.1 Definitive Defense Effect Within the Interface
      10.4.1.2 Uncertain Defense Effect on or Outside the Interface
      10.4.1.3 Uncertain Defense Effect Against Front Door Problems
      10.4.1.4 Uncertain Social Engineering Effects
    10.4.2 Reference Perimeter of Mimic Defense Effects
      10.4.2.1 Ideal Effects of Mimic Defense
      10.4.2.2 Reference Range of Defense Effect
    10.4.3 Factors to Be Considered in Mimic Defense Verification and Test
      10.4.3.1 Background of Testing
      10.4.3.2 Principles of Testing
      10.4.3.3 Major Testing Indicators
      10.4.3.4 Considerations of Test Methods
      10.4.3.5 Qualitative Analysis of Defense Effectiveness
    10.4.4 Reflections on Quasi-stealth Evaluation
    10.4.5 Mimic Ruling-Based Measurable Review
    10.4.6 Mimic Defense Benchmark Function Experiment
    10.4.7 Attackers’ Perspective
      10.4.7.1 Mining or Setting up Vulnerabilities/Backdoors in the Mimic Interface
      10.4.7.2 Creating a Homologous Ecosystem with the Development Tools and the Open-Source Community Model
      10.4.7.3 Black-Box Operations Using “Irreplaceable” Advantage
      10.4.7.4 Developing Attack Codes that Are Not Dependent on the Environment
      10.4.7.5 Coordinated Operation Under Non-cooperative Conditions Using Input Sequence
      10.4.7.6 Trying to Bypass the Mimic Interface
      10.4.7.7 Attacking the Mimic Control Aspect
      10.4.7.8 DDoS Brute Force Attacks
      10.4.7.9 Social Engineering-Based Attacks
      10.4.7.10 Directly Cracking Access Command or Password
  References
Chapter 11: Foundation and Cost of Mimic Defense
  11.1 Foundation for Mimic Defense Realization
    11.1.1 Era of Weak Correlation of Complexity to Cost
    11.1.2 High Efficiency Computing and Heterogeneous Computing
    11.1.3 Diversified Ecological Environment
    11.1.4 Standardization and Open Architecture
    11.1.5 Virtualization Technology
    11.1.6 Reconfiguration and Reorganization
    11.1.7 Distributed and Cloud Computing Service
    11.1.8 Dynamic Scheduling
    11.1.9 Feedback Control
    11.1.10 Quasi-Trusted Computing
    11.1.11 Robust Control
    11.1.12 New Developments of System Structure Technologies
  11.2 Analysis of Traditional Technology Compatibility
    11.2.1 Naturally Accepting Traditional Security Technologies
    11.2.2 Naturally Carrying Forward the Hardware Technological Advances
    11.2.3 Strong Correlation to Software Technological Development
    11.2.4 Depending on the Open and Plural Ecological Environment
  11.3 Cost of Mimic Defense Implementation
    11.3.1 Cost of Dynamicity
    11.3.2 Cost of Heterogeneity
    11.3.3 Cost of Redundancy
    11.3.4 Cost of Cleanup and Reconfiguration
    11.3.5 Cost of Virtualization
    11.3.6 Cost of Synchronization
    11.3.7 Cost of Ruling
      11.3.7.1 Synchronous Judgment
      11.3.7.2 Agreed Output
      11.3.7.3 First Come, First Output
      11.3.7.4 Regular Judgment
      11.3.7.5 Mask Decision
      11.3.7.6 Normalized Pretreatment
    11.3.8 Cost of Input/Output Agency
    11.3.9 Cost of One-Way Connection
  11.4 Scientific and Technological Issues to Be Studied and Solved
    11.4.1 Scientific Issues Needing Urgent Study in the CMD Field
    11.4.2 Engineering and Technical Issues Needing Urgent Solution in the CMD Field
      11.4.2.1 Dissimilarity Design and Screening Theory
      11.4.2.2 Pluralistic and Diversified Engineering Issues
      11.4.2.3 Assessing the Security Impact of the “Homologous” Component Vulnerability on the DHR Architecture
      11.4.2.4 How to Establish a System Design Reference Model
      11.4.2.5 How to Prevent Standby Attacks
      11.4.2.6 Mimic Ruling
      11.4.2.7 Protection of the Mimic Control
      11.4.2.8 Mimic Structural Design Technology
      11.4.2.9 Mimic Construction Implementation Technology
    11.4.3 Defense Effect Test and Evaluation
    11.4.4 Comprehensive Use of Defense Capability
    11.4.5 Issues Needing Continuous Attention
    11.4.6 Emphasizing the Natural and Inspired Solutions
  References
Chapter 12: Examples of Mimic Defense Application
  12.1 Mimic Router Verification System
    12.1.1 Threat Analysis
    12.1.2 Designing Idea
    12.1.3 DHR-Based Router Mimic Defense Model
    12.1.4 System Architecture Design
      12.1.4.1 Overall Framework
      12.1.4.2 Function Unit Design
    12.1.5 Mimic Transformation of the Existing Network
    12.1.6 Feasibility and Security Analysis
  12.2 Network Storage Verification System
    12.2.1 Overall Plan
    12.2.2 Arbiter
    12.2.3 Metadata Server Cluster
    12.2.4 Distributed Data Server
    12.2.5 The Client
    12.2.6 System Security Test and Result Analysis
  12.3 Mimic-Structured Web Server Verification System
    12.3.1 Threat Analysis
    12.3.2 Designing Idea
    12.3.3 System Architecture Design
    12.3.4 Functional Unit Design
      12.3.4.1 Request Dispatching and Balancing (RDB) Module
      12.3.4.2 Dissimilar Redundant Response Voter
      12.3.4.3 Dynamically Executing Scheduler
      12.3.4.4 Dissimilar Virtual Web Server Pool
      12.3.4.5 Primary Controller
      12.3.4.6 Database Instruction Labelling (DIL) Module
    12.3.5 Prototype Design and Realization
    12.3.6 Attack Difficulty Evaluation
    12.3.7 Cost Analysis
  12.4 Cloud Computing and Virtualization Mimic Construction
    12.4.1 Basic Layers of Cloud Computing
    12.4.2 Cloud Computing Architecture Layers
    12.4.3 Virtualized DHR Construction
  12.5 Application Consideration for Software Design
    12.5.1 Effect of Randomly Invoking Mobile Attack Surface
    12.5.2 Guard Against Hidden Security Threats from Third Parties
    12.5.3 Typical Mimic Defense Effects
  12.6 Commonality Induction of System-Level Applications
  References
Chapter 13: Testing and Evaluation of the Mimic Defense Principle Verification System
  13.1 Mimic Defense Principle Verification in the Router Environment
    13.1.1 Design of Test Methods for Mimic-Structured Routers
    13.1.2 Basic Router Function and Performance Test
      13.1.2.1 Routing Protocol Functional Test
      13.1.2.2 Forwarding Performance Comparison Test
    13.1.3 Test of the Mimic Defense Mechanism and Result Analysis
      13.1.3.1 Data Transformation Function Test
      13.1.3.2 Data Stream Fingerprint Function Test
      13.1.3.3 Protocol Executor Random Display Test
      13.1.3.4 Protocol Executor Routing Abnormity Monitoring and Handling Test
      13.1.3.5 Endogenous Flow Interception Test
    13.1.4 Defense Effect Test and Result Analysis
      13.1.4.1 Attack Models and Testing Scenarios
      13.1.4.2 System Information Scanning Test
      13.1.4.3 Mimic Interface Vulnerability Detection Test
      13.1.4.4 Test of Difficulty in Vulnerability Exploitation Within Mimic Interface
      13.1.4.5 Test of Difficulty in Utilizing Backdoors in the Mimic Interface
    13.1.5 Test Summary of Mimic-Structured Router
  13.2 Mimic Defense Principle Verification in the Web Server Environment
    13.2.1 Design of Test Methods for Mimic-Structured Web Servers
      13.2.1.1 Test Process Design
      13.2.1.2 Test Environment Setting
    13.2.2 Basic Functional Test and Compatibility Test for Web Servers
      13.2.2.1 HTTP Protocol Function Test
      13.2.2.2 Page Compatibility Comparison Test
    13.2.3 Mimic Defense Mechanism Test and Result Analysis
    13.2.4 Defense Effect Test and Result Analysis
      13.2.4.1 Scanning Detection Test
      13.2.4.2 Operating System Security Test
      13.2.4.3 Data Security Test
      13.2.4.4 Anti-Trojan Test
      13.2.4.5 Web Application Attack Test
    13.2.5 Web Server Performance Test
      13.2.5.1 Benchmark Web Server Performance Testing
      13.2.5.2 DIL Module Performance Test
      13.2.5.3 System Overall Performance Test
    13.2.6 Summary of the Web Principle Verification System Test
  13.3 Test Conclusions and Prospects
  References
Chapter 14: Application Demonstration and Current Network Testing of Mimic Defense
  14.1 Overview
  14.2 Application Demonstration of the Mimic-Structured Router
    14.2.1 Status Quo of the Pilot Network
      14.2.1.1 Threat Analysis
      14.2.1.2 Application Scenario
      14.2.1.3 Product Plan
      14.2.1.4 Application Deployment
      14.2.1.5 Cost Analysis
      14.2.1.6 Application Outcome
    14.2.2 Current Network Testing
      14.2.2.1 Testing Purpose
      14.2.2.2 Testing Plan
      14.2.2.3 Testing and Evaluation Items
      14.2.2.4 Current Network Testing
      14.2.2.5 Testing and Evaluation
  14.3 Mimic-Structured Web Server
    14.3.1 Application Demonstration
      14.3.1.1 Application of the Mimic-Structured Web Server (MSWS) in a Financial Enterprise
      14.3.1.2 Application of the MSWS on a Government Website
      14.3.1.3 Application of the Mimic-Structured Web Virtual Host (MSWVH) in Gianet Fast Cloud (GFC)
    14.3.2 Current Network Testing
      14.3.2.1 Testing of the MSWS
      14.3.2.2 Testing of the MSWVH
  14.4 Mimic-Structured Domain Name Server (MSDN Server)
    14.4.1 Application Demonstration
      14.4.1.1 Threat Analysis
      14.4.1.2 Application Scenario
      14.4.1.3 Product Plan
      14.4.1.4 Application Deployment
      14.4.1.5 Cost Analysis
      14.4.1.6 Application Effect
    14.4.2 Testing and Evaluation
      14.4.2.1 CUHN
      14.4.2.2 Gianet
  14.5 Conclusions and Prospects