Cryptography And Security Technology


Other technologies

The field of cryptography involves many techniques. Below is a summary of some topics that are still under development and discussion.

Zero-knowledge proof

Zero-knowledge proof (ZKP) is a process in which the prover convinces the verifier that a certain assertion (statement) is correct without revealing any additional information to the verifier.

The proof process can be either interactive (Interactive) or non-interactive (Non-interactive).

Research on zero-knowledge proofs began with the seminal 1985 paper "The Knowledge Complexity of Interactive Proof-Systems" by Shafi Goldwasser, Silvio Micali, and Charles Rackoff, for which the three authors won the first Gödel Prize in 1993.

The paper proposes that a zero-knowledge proof must satisfy three conditions:

- Completeness: if the assertion is true, an honest prover can convince an honest verifier;
- Soundness: if the assertion is false, a cheating prover can convince the verifier only with negligible probability;
- Zero-knowledge: the verifier learns nothing beyond the fact that the assertion is true.

Interactive zero-knowledge proofs are relatively easy to construct and require a series of interactions between the prover and the verifier. Typically, the verifier asks the prover a series of questions; if the prover answers them all correctly, the verifier can conclude with high probability that the prover's assertion holds.

For example, prover Alice wants to prove to verifier Bob that two pictures that look identical are in fact different, and that she can tell them apart. Bob, out of Alice's sight, either swaps the two pictures or keeps their order, and then asks Alice whether the order was changed. If Alice answers correctly every time, Bob will accept Alice's claim with high probability. During this process, Bob learns nothing beyond the fact that Alice can indeed recognize the difference: he cannot obtain or infer any additional information (including what the difference is), nor can he use Alice's proof (such as a video of the proof process) to convince anyone else. Note that if Alice could guess Bob's swap decisions in advance, she could cheat.
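The interaction above can be simulated directly. The sketch below (hypothetical code, modeling Bob's swap as a coin flip) shows why an honest prover always passes, while a cheating prover who must guess survives n rounds only with probability 2^-n:

```python
import random

def run_protocol(prover_can_distinguish, rounds, rng):
    """One full proof: `rounds` swap-or-not challenges; True if the verifier accepts."""
    for _ in range(rounds):
        swapped = rng.choice([True, False])        # Bob secretly swaps or keeps the order
        if prover_can_distinguish:
            answer = swapped                       # Alice truly sees the difference
        else:
            answer = rng.choice([True, False])     # a cheater can only guess
        if answer != swapped:
            return False                           # one wrong answer -> reject
    return True

rng = random.Random(0)
assert run_protocol(True, 20, rng)   # honest prover always convinces Bob
# A cheating prover passes 20 rounds only with probability 2**-20 (about 1e-6).
```

Each extra round halves the cheater's chance, which is why a modest number of rounds already gives the verifier high confidence.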

Non-interactive zero-knowledge proofs (NIZK) are much harder to achieve. In fact, a universal non-interactive perfect or statistical zero-knowledge proof system does not exist, but computationally secure non-interactive zero-knowledge proof systems can be designed, and these have wide practical value.

In the 1991 paper "Noninteractive Zero-Knowledge", Blum, De Santis, Micali, and Persiano presented the first non-interactive perfect zero-knowledge proof (NIPZK) system, for the quadratic non-residuosity problem.

In 2012, Nir Bitansky, Ran Canetti, and co-authors proposed a practical non-interactive zero-knowledge argument scheme, zk-SNARK, in the paper "From Extractable Collision Resistance to Succinct Non-Interactive Arguments of Knowledge, and Back Again"; it was later widely used in projects such as Zcash. Currently, the main idea behind non-interactive zero-knowledge arguments is to turn the proven assertion into a hard problem (usually an NP-complete problem such as SAT, in some cases requiring random parameters provided in advance or by a third party). If the prover really knows the assertion, it can solve the problem within a certain time; otherwise the problem is hard to solve. The verifier confirms that the prover knows the assertion by checking the answer.
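One standard way to remove the interaction is the Fiat-Shamir heuristic: the verifier's random challenge is replaced by a hash of the prover's commitment. The toy sketch below (insecurely small parameters, for illustration only) applies it to a Schnorr-style proof of knowledge of a discrete logarithm:

```python
import hashlib
import random

p, q, g = 1019, 509, 4          # toy order-q subgroup of Z_p* (far too small for real use)

def H(*ints):
    """Hash a tuple of integers down to a challenge in [0, q)."""
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, rng):
    """Non-interactive proof of knowledge of x, where y = g^x mod p."""
    y = pow(g, x, p)
    r = rng.randrange(1, q)
    t = pow(g, r, p)            # commitment
    c = H(t, y)                 # Fiat-Shamir: hash replaces the verifier's challenge
    s = (r + c * x) % q         # response
    return y, (t, s)

def verify(y, proof):
    t, s = proof
    c = H(t, y)
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # g^s == t * y^c mod p

rng = random.Random(42)
y, proof = prove(x=123, rng=rng)
assert verify(y, proof)
t, s = proof
assert not verify(y, (t, (s + 1) % q))   # a tampered response is rejected
```

Anyone can check the proof offline; no round trips with the prover are needed, which is the property non-interactive schemes like zk-SNARK generalize.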

For zero-knowledge proofs to be widely adopted, they still need testing in practice. In addition, work is needed to reduce or even eliminate the setup-stage computation, to improve scalability, and to provide resistance against quantum computing attacks.

Verifiable random function

Verifiable random functions (VRF) were first proposed by Silvio Micali (MIT), Michael Rabin (Harvard University), and Salil Vadhan (MIT) in the 1999 paper "Verifiable Random Functions".

The paper discusses a special type of pseudorandom function whose output can be verified in certain scenarios. In general, a VRF can be constructed from signature and hash operations.

For example, Alice has a public key Pk and the corresponding private key Sk. Alice declares a verifiable random function F and an input x, and computes y = F(Sk, x) together with a proof. Using Alice's public key Pk, Bob can verify for the same x and F that the result is indeed y. Note that no one can predict the value of y in advance, thanks to the pseudorandomness of F.
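A minimal sketch of the "signature plus hash" idea (a toy RSA-style construction with tiny primes, not the original Micali-Rabin-Vadhan scheme): the deterministic signature serves as the proof pi, and its hash is the pseudorandom output y:

```python
import hashlib

# Toy RSA key (tiny primes, illustration only; requires Python 3.8+ for pow(e, -1, m)).
p_, q_ = 1009, 1013
n = p_ * q_
e = 65537
d = pow(e, -1, (p_ - 1) * (q_ - 1))

def h_int(data: bytes) -> int:
    """Hash the input into the RSA modulus range."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def vrf_eval(sk, x: bytes):
    """y = H(signature); the deterministic signature doubles as the proof pi."""
    pi = pow(h_int(x), sk, n)                   # RSA-style signature on x
    y = hashlib.sha256(str(pi).encode()).hexdigest()
    return y, pi

def vrf_verify(pk, x: bytes, y, pi):
    ok = pow(pi, pk, n) == h_int(x)             # check the signature on x
    return ok and y == hashlib.sha256(str(pi).encode()).hexdigest()

y, pi = vrf_eval(d, b"round-1")
assert vrf_verify(e, b"round-1", y, pi)
assert not vrf_verify(e, b"round-1", y, pi + 1)   # a forged proof is rejected
```

Only the private-key holder can produce (y, pi), but anyone holding the public key can check them, which is exactly the property leader-election protocols rely on.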

VRF thus provides a random sequence that everyone agrees on and can verify, and it can be used, for example, in voting and leader-election scenarios in distributed systems.

Secure multi-party computation

Secure multi-party computation (Secure Multi-Party Computation, SMPC or SMC) was proposed by Andrew Chi-Chih Yao in the 1986 paper "How to Generate and Exchange Secrets". The hypothetical scenario: multiple participants each hold part of the data and want to perform a computation over all parties' data without revealing their own inputs. The problem is useful when parties that do not fully trust each other still need to cooperate (for example, data collaboration while protecting privacy). It is sometimes called privacy-preserving computation.

According to the number of participants, schemes can be divided into two-party and multi-party computation; according to the implementation technique, into schemes based on noise (such as differential privacy), on secret sharing, on garbled circuits, and on homomorphic encryption.

Generally speaking, noise-based solutions are easy to implement, but their use cases are limited. Solutions based on cryptographic techniques are more general but usually carry a higher computational cost.
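As a minimal illustration of the secret-sharing approach (an additive-sharing sketch with hypothetical values): three parties learn the sum of their private inputs, while each individual input stays hidden behind uniformly random shares:

```python
import random

MOD = 2**61 - 1   # all arithmetic is done modulo a large prime

def share(secret, n_parties, rng):
    """Split `secret` into n additive shares that sum to it mod MOD."""
    shares = [rng.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

rng = random.Random(7)
salaries = [5200, 4800, 6100]          # each party's private input
n = len(salaries)

# Each party i splits its input and sends share j to party j.
all_shares = [share(s, n, rng) for s in salaries]

# Each party j locally sums the shares it received and publishes only that sum.
partial = [sum(all_shares[i][j] for i in range(n)) % MOD for j in range(n)]

# The published partial sums reveal the total, but not any individual input.
total = sum(partial) % MOD
assert total == sum(salaries)
```

Because every share a party sees is uniformly random on its own, no subset smaller than all n parties learns anything about another party's salary.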

Oblivious transfer

The oblivious transfer (OT) protocol was proposed by S. Even, O. Goldreich, and A. Lempel in 1983 in the paper "A Randomized Protocol for Signing Contracts". It was later applied to scenarios such as secure multi-party computation.

The protocol addresses the following problem: a sender transmits information to a receiver while the privacy of both parties is protected. The sender cannot learn which piece of information the receiver finally obtained, and the receiver cannot obtain any information beyond the piece it requested.

For example, a bank queries a customer's credit status from a credit-reporting company to decide whether to grant a loan. The bank does not want the credit-reporting company to learn that this customer is applying for a loan (that is the customer's private business); at the same time, the credit-reporting company does not want the bank to obtain the credit data of any other customer.

A simple implementation: the sender sends N public keys to the receiver. The receiver randomly selects one of them, encrypts a symmetric key with it, and returns the ciphertext. The sender decrypts the ciphertext with all N private keys, obtaining one symmetric key and N-1 random strings, but cannot tell which is which. The sender then uses each of the N decryption results to encrypt one message (N messages in total) and sends them all to the receiver, who can decrypt only one of them. If the two parties agree on N random strings in advance to blind the results, the receiver can also choose to receive a specific message.
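A toy rendition of this construction for N = 2, using tiny RSA keys and a hash-based stream mask (illustration only; real OT needs full-size keys and proper padding):

```python
import hashlib
import random

def keygen(p, q, e=65537):
    """Toy RSA key pair: returns ((n, e), (n, d))."""
    n = p * q
    return (n, e), (n, pow(e, -1, (p - 1) * (q - 1)))

def mask(key_int, data: bytes) -> bytes:
    """XOR `data` with a hash-derived pad (messages up to 32 bytes)."""
    pad = hashlib.sha256(str(key_int).encode()).digest()
    return bytes(a ^ b for a, b in zip(data, pad))

rng = random.Random(1)
msgs = [b"credit: good", b"credit: poor"]          # sender's two messages
keys = [keygen(1009, 1013), keygen(1019, 1021)]    # sender's two key pairs

# Receiver picks index 1 and encrypts a fresh symmetric key under that public key.
choice = 1
k = rng.randrange(2, min(pk[0] for pk, _ in keys))
n_c, e_c = keys[choice][0]
c = pow(k, e_c, n_c)

# Sender decrypts c under BOTH private keys; one result is k, the other an
# indistinguishable random value. Each message is masked with one result.
derived = [pow(c % n, d, n) for _, (n, d) in keys]
ciphertexts = [mask(derived[j], msgs[j]) for j in range(2)]

# Receiver can unmask only the chosen message; the sender never learns `choice`.
assert mask(k, ciphertexts[choice]) == msgs[choice]
```

The sender sees only c, which looks the same whichever key the receiver chose, and the receiver lacks the key material to unmask the other ciphertext.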

Another simple implementation: the receiver provides N random strings as public keys and proves that it knows the private keys corresponding to only K of them. The sender encrypts its N messages with these strings and sends them to the receiver, who can then decrypt only K of the messages.

Differential privacy

Differential privacy (DP) is a comparatively practical privacy-protection mechanism.

The initial problem was how to share a data set while protecting the privacy of the individuals in it. The mechanism was proposed by Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith in the 2006 paper "Calibrating Noise to Sensitivity in Private Data Analysis". Thanks to its application prospects for privacy protection, it has become a research hotspot in recent years.

The main idea is to add carefully calibrated noise to the data set so that computations on the modified data still yield essentially the same statistical results, while no individual in it can be traced back to their original information. For example, removing any single record from the data set should not noticeably affect query results.

Currently feasible mechanisms mainly include adding Laplace noise (suitable for numeric queries) and the exponential mechanism (suitable for non-numeric queries).

Quantum cryptography

With progress in quantum computing and quantum communication, quantum cryptography has received more and more attention and is considered likely to have a major impact on existing cryptographic security mechanisms.

The concept of quantum computing was first proposed by the physicist Richard Feynman in 1981. The basic principle is that a qubit can be in a coherent superposition of multiple states at once, so in theory a small number of qubits can represent a large amount of information and process it simultaneously, greatly increasing computation speed. Quantum computing has shown the potential to surpass classical computing in certain specific areas. For example, Shor's algorithm (proposed in 1994) can in theory factor large integers far faster than any known classical method. In March 2016, Shor's algorithm was used for the first time to complete the prime factorization of the number 15 in a scalable way.

This means that currently widespread asymmetric encryption algorithms, including RSA (based on large-integer factorization) and ECC (based on the elliptic-curve discrete logarithm problem), may become easy to crack in the future. Of course, modern cryptography will not collapse with the advent of quantum computing. On the one hand, quantum computing devices are still far from being practical general-purpose computers, giving cryptographers time to explore more secure algorithms. On the other hand, for many security mechanisms no quantum algorithm that accelerates cracking is yet known, including hash-based digital signatures, lattice-based cryptography, code-based cryptography, and so on.

Quantum communication can provide a mechanism for securely negotiating keys, and is expected to make the unconditionally secure "one-time pad" practical. Quantum key distribution relies on quantum effects such as entanglement: the measurement outcomes of two entangled quanta are correlated even over long distances, and once the channel is eavesdropped, both communicating parties are alerted and can discard the leaked bits of that transmission. This property is well suited to large-scale key distribution. For example, the BB84 protocol proposed in 1984 combines a quantum channel with an open classical channel to achieve secure key distribution.
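The basis-sifting step of BB84 can be simulated classically (a sketch without an eavesdropper; with one present, comparing a sample of the sifted bits over the open channel would reveal mismatches and expose the interception):

```python
import random

def bb84_sift(n_bits, rng):
    """Simulate BB84 sifting: keep the bits where both sides chose the same basis."""
    alice_bits  = [rng.randrange(2) for _ in range(n_bits)]
    alice_bases = [rng.randrange(2) for _ in range(n_bits)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randrange(2) for _ in range(n_bits)]
    # Measuring in the wrong basis yields a random outcome.
    bob_bits = [
        b if ab == bb else rng.randrange(2)
        for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
    ]
    # Over the public channel the parties compare bases (never bits) and keep matches.
    key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_b = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return key_a, key_b

key_a, key_b = bb84_sift(256, random.Random(3))
assert key_a == key_b            # matching bases yield identical key bits
assert 80 < len(key_a) < 176     # roughly half the positions survive sifting
```

On average half of the bases match, so each transmitted qubit contributes about half a bit of shared key.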

Note: the one-time pad was the first scheme proposed to achieve theoretically unbreakable symmetric encryption. Its characteristics are that the key is truly random and used only once, the key is as long as the plaintext, and encryption is a bitwise XOR of key and plaintext.
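A minimal one-time-pad sketch: because the truly random key is as long as the message and never reused, the XOR ciphertext is information-theoretically secure:

```python
import secrets

def otp_xor(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt: XOR with the pad is its own inverse."""
    assert len(key) == len(data)       # key must be exactly as long as the message
    return bytes(k ^ b for k, b in zip(key, data))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))    # truly random key, used only once
ct = otp_xor(key, msg)
assert otp_xor(key, ct) == msg         # decryption is the same XOR operation
```

The scheme's weakness is key logistics, which is exactly the gap quantum key distribution aims to fill.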

Social engineering

Cryptography and security have always been important topics for both academia and industry, and the related technologies keep developing and improving. However, even with theoretically perfect techniques, there is no perfect system. Countless cases have shown that when a seemingly well-designed system is eventually compromised, the cause is rarely a deep flaw in the design; the problems usually lie in places that are painfully obvious in hindsight.

For example: a system administrator sticks the login password to the front of the computer; finance staff disclose users' sensitive personal information over the phone; company employees casually open attachments from unknown emails; strangers walk into the office under the pretext of promotions or surveys and steal information…

Kevin Mitnick, the famous computer hacker and security consultant, reportedly broke into the North American air-defense command system at the age of 15. In his book "The Art of Deception", he recounts countless cases of easily obtaining all kinds of protected information through social-engineering means.

Tags: #Cryptography #Quantum Computing
