Controlling and protecting access to patent data using blockchain technology

Scheme security analysis

Data privacy security

As shown in Figure 5, the access control structure has a significant impact on the data encryption rate: the encryption and decryption speed of a data file depends on the access control structure. The more complex the structure, the slower encryption and decryption become; conversely, the simpler the structure, the faster they are. As the number of attributes involved in the access control policy grows, the time spent on data encryption increases gradually, but the increase per attribute is nearly steady, indicating that the added overhead is acceptable. This result shows that distributed storage of patent data is secure and can meet the requirements of off-site storage for examination.

Figure 5

Relationship between number of attributes and encryption overhead.
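The trend in Figure 5 can be reproduced with a toy simulation: in CP-ABE-style encryption the dominant cost is roughly one group exponentiation per attribute in the access policy, so the total time grows about linearly with the attribute count. The sketch below is illustrative only (the modulus, generator, and function names are assumptions, not the paper's implementation):

```python
import time

# Stand-ins for a pairing group: a large prime modulus and a generator.
# These are illustrative values only, not secure parameter choices.
P = 2**127 - 1   # Mersenne prime used as a toy modulus
G = 3            # toy group generator

def encrypt_cost(num_attributes, trials=50):
    """Simulate CP-ABE-style encryption cost: roughly one group
    exponentiation per attribute in the access control policy."""
    start = time.perf_counter()
    for _ in range(trials):
        for attr in range(num_attributes):
            # one modular exponentiation per attribute share
            pow(G, 2**64 + attr, P)
    return (time.perf_counter() - start) / trials

# Cost grows approximately linearly with the number of attributes,
# mirroring the "steady increase" reported for Figure 5.
for n in (2, 4, 8, 16):
    print(f"{n:2d} attributes: {encrypt_cost(n) * 1e3:.3f} ms")
```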

Data operator security

As shown in Figure 6, the number of attribute authorities corresponding to different data operators ranges from 1 to 4. The number of attribute authorities has a significant impact on the encryption parameter calculation time: the more attribute authorities there are, the longer encryption takes, because each additional authority brings a corresponding attribute management set and more parameters to manage. This result also shows that the data operator cannot obtain patent data, steal data resources, or cause data leaks.

Figure 6

Relationship between number of attribute authorities and encryption overhead.

Data manager security

As shown in Figure 7, the number of authorizations is associated with the data encryption delay. The more permissions the data owner has to manage, i.e., the more access control policies there are, the greater the encryption overhead. Hence, even when there are many data managers, the specific content of patent data cannot be obtained easily: multiple managers must grant the appropriate permissions before the data content can be accessed.

Figure 7

Relationship between the number of authorizations and the encryption overhead.

Data owner security

Figure 8A shows the time taken to decrypt a file, and Figure 8B shows the time taken to update the ciphertext. The more attributes involved in decryption and in the ciphertext, the greater the decryption and update overhead. Owing to the distributed attribute management architecture and the ciphertext update calculation process, only part of the ciphertext needs to be updated when an attribute changes, which effectively reduces the ciphertext update time after an attribute update; the ciphertext update delay is significantly improved over the classic CP-ABE mechanism21. The data owner acts as the agent of security services, effectively preventing data leakage by storage product vendors, data management vendors, and system vendors. The traceability and tamper-resistance of the blockchain are exploited: by managing access control policies and attributes through blockchain transactions, the scheme realizes policy management and tracking across the whole process of policy release, update, and revocation. Policies are stored on the blockchain in an open and transparent form that any user can query. This separates the query function from the third-party-mediated mode of traditional access control services and solves the problem of non-transparent permission decisions.

Figure 8

Time overhead of decrypting and of updating ciphertext after an attribute update (A Decryption overhead; B Ciphertext update time).
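The policy life-cycle tracking described above, with release, update, and revocation recorded as blockchain transactions that any user can query, can be sketched as a minimal hash-chained log. This is an illustrative stand-in rather than the paper's implementation; the class and method names are assumptions:

```python
import hashlib
import json

class PolicyLedger:
    """Minimal hash-chained log of access control policy life-cycle
    events (release, update, revoke). A toy stand-in for recording
    policy transactions on a blockchain."""

    def __init__(self):
        self.chain = []

    def _append(self, action, policy_id, policy):
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"action": action, "policy_id": policy_id,
                  "policy": policy, "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.chain.append(record)

    def release(self, policy_id, policy):
        self._append("release", policy_id, policy)

    def update(self, policy_id, policy):
        self._append("update", policy_id, policy)

    def revoke(self, policy_id):
        self._append("revoke", policy_id, None)

    def history(self, policy_id):
        # Policies are stored openly: any user can query the full history.
        return [r for r in self.chain if r["policy_id"] == policy_id]

    def verify(self):
        # Tamper-evidence: each record commits to its predecessor's hash.
        prev = "0" * 64
        for r in self.chain:
            body = {k: v for k, v in r.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if r["prev"] != prev or r["hash"] != digest:
                return False
            prev = r["hash"]
        return True
```

Any modification of an already-recorded policy breaks the hash chain, so `verify()` exposes tampering, which is the non-tampering property the scheme relies on.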

Model performance analysis

Computational overhead analysis

Figures 9A–D illustrate the key generation overhead, encryption overhead, decryption overhead, and overall computational overhead under different data sets. The proposed model is compared with the KP-ABE (Key-Policy Attribute-Based Encryption) algorithm22. The encryption overhead of both the proposed model and the KP-ABE algorithm increases linearly with the number of attributes. In the proposed model, the key generation overhead also grows linearly with the number of attributes, whereas in the KP-ABE algorithm it grows exponentially. In the proposed model, the decryption overhead is lower than the encryption overhead because the decryption algorithm requires fewer exponentiation operations. Encrypting a 10 MB file takes 35 ms with 64-bit data and 105 ms with 128-bit data. All the experimental results show that using local resources at branch offices for decryption can reduce the patent office's cloud computing overhead.

Figure 9

Computational overhead performance analysis (A Key overhead; B Encryption overhead; C Decryption overhead; D Computational overhead).

Storage overhead analysis

Figure 10A displays the encryption algorithm overhead, and Figure 10B displays the decryption algorithm overhead. The DS-EA-based scheme incurs the lowest cost. Compared with the ABE (Attribute-Based Encryption)-based and BE (Broadcast Encryption)-based schemes, DS-EA can greatly reduce the key storage overhead: in this scheme, users only need to store their private keys and the system parameters, whereas in the ABE-based scheme users must store their access structure and the corresponding private keys. Therefore, DS-EA needs only a small key storage overhead to provide secure cloud data collaboration services.

Figure 10

Storage overhead performance analysis (A Encryption algorithm; B Decryption algorithm).

Network overhead analysis

Figure 11A shows the network overhead of the encryption algorithm, and Figure 11B shows the network overhead of the re-encryption algorithm. The proposed scheme takes only 1 s to decrypt 64 KB of data, whereas the algorithm proposed in previous research takes 1.5 s. Although the decryption algorithm of the proposed scheme needs to perform a matching operation for each piece of data, this operation only needs to be performed once and can be carried out at the very beginning. As the number of receivers increases, the encryption time remains almost stable, so the DS-EA scheme scales easily to cloud computing. The experimental results show that DS-EA is lightweight and can be applied effectively in practice. The algorithm reduces the storage space required for the patent office's encrypted data and thus saves storage effectively.

Figure 11

Time overhead of encryption algorithms in SECO, the ABE-based scheme, and the BE-based scheme (A Encryption algorithm; B Re-encryption algorithm).

Encryption performance analysis

Figure 12A illustrates the encryption performance under different values of k, and Figure 12B presents the encryption performance under different data sets. Only 1% of the data requires asymmetric encryption, which greatly reduces the encryption workload while increasing encryption speed and ensuring data security. Compared with state-of-the-art algorithms, the proposed algorithm has significant advantages when the value of k is high.

Figure 12

Percentage of users with privacy leaks under different k values and dataset sizes (A Under different values of k; B Under different data sets).
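The hybrid idea described above, asymmetric encryption for only about 1% of the data with a fast symmetric cipher for the bulk, can be sketched as follows. The primitives here are deliberately toy ones (textbook RSA with small, well-known primes and a SHA-256 keystream); they are assumptions for illustration, and a real system would use a proper KEM and an AEAD cipher:

```python
import hashlib

# Toy textbook RSA over two known primes (insecure, illustrative only).
P_, Q_ = 999983, 1000003
N = P_ * Q_
E = 65537
D = pow(E, -1, (P_ - 1) * (Q_ - 1))

def rsa_encrypt(m):
    return pow(m, E, N)

def rsa_decrypt(c):
    return pow(c, D, N)

def stream(key, n):
    """Keyed SHA-256 counter keystream (toy symmetric cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data, keystream):
    return bytes(a ^ b for a, b in zip(data, keystream))

def hybrid_encrypt(data, sensitive_fraction=0.01):
    """Encrypt only the sensitive head of the data (about 1%) with the
    expensive asymmetric primitive; the bulk uses the fast stream cipher."""
    cut = max(1, int(len(data) * sensitive_fraction))
    head, bulk = data[:cut], data[cut:]
    sym_key = b"demo-session-key"              # would be random in practice
    head_ct = [rsa_encrypt(b) for b in head]   # asymmetric: per-byte toy RSA
    bulk_ct = xor(bulk, stream(sym_key, len(bulk)))
    return head_ct, bulk_ct, cut

def hybrid_decrypt(head_ct, bulk_ct, cut):
    sym_key = b"demo-session-key"
    head = bytes(rsa_decrypt(c) for c in head_ct)
    bulk = xor(bulk_ct, stream(sym_key, len(bulk_ct)))
    return head + bulk
```

Since only `cut` bytes go through the slow asymmetric path, total encryption cost is dominated by the cheap symmetric pass, which is the source of the speedup the text reports.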

Analysis of test performance

Figures 13A–D present the MAE (Mean Absolute Error) results of the model under the a = 0.5 Count query, a = 1.0 Count query, a = 0.5 Sum query, and a = 1.0 Sum query. Figures 14A–D present the corresponding MRE (Mean Relative Error) results. In all cases, for both MAE and MRE, the results of the proposed algorithm are lower than those of the Dwork algorithm23. When the query size is 3 and a = 0.5, the MAE of the Count query result of the proposed algorithm is less than 20, while that of the Dwork algorithm is close to 70. When the query size is 4 and a = 0.5, the MRE of the Sum query result of the proposed algorithm is less than 0.1, whereas that of the Dwork algorithm is greater than 0.2. As the query size increases, both the MAE and the MRE decrease. Moreover, as a increases, the MAE and MRE decrease.

Figure 13

The MAE of different query sizes under different privacy budgets (A a = 0.5 Count query; B a = 1.0 Count query; C a = 0.5 Sum query; D a = 1.0 Sum query).

Figure 14

The MRE of different query sizes under different privacy budgets (A a = 0.5 Count query; B a = 1.0 Count query; C a = 0.5 Sum query; D a = 1.0 Sum query).
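The trend in Figures 13 and 14, with error falling as the privacy budget a grows, is what the Laplace mechanism predicts: a Count query has sensitivity 1, so adding Laplace noise of scale 1/a satisfies a-differential privacy, and the expected absolute error is 1/a. A minimal sketch of this mechanism (not the paper's algorithm; function names and data are illustrative):

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_count(records, predicate, a):
    """Differentially private Count query: a count has sensitivity 1,
    so Laplace noise with scale 1/a gives a-DP (a = privacy budget,
    written 'a' as in the figures above)."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / a)

def mae(records, predicate, a, trials=1000):
    """Empirical Mean Absolute Error of the noisy count."""
    true_count = sum(1 for r in records if predicate(r))
    return sum(abs(private_count(records, predicate, a) - true_count)
               for _ in range(trials)) / trials

random.seed(0)
data = [random.randint(0, 99) for _ in range(10_000)]
pred = lambda x: x < 50

# Larger budget a -> less noise -> lower MAE, matching Figs. 13-14.
print("MAE at a=0.5:", mae(data, pred, 0.5))   # expected near 1/0.5 = 2
print("MAE at a=1.0:", mae(data, pred, 1.0))   # expected near 1/1.0 = 1
```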

Figure 15A shows the relative error of the model under the Count query, and Figure 15B shows the relative error under the Sum query. As the size of the dataset increases, the relative error rate decreases. When the dataset size reaches 1,500,000 and a = 0.5, the relative error rate of the Sum query result is 0.7; when the dataset size is 4,500,000, the relative error rate is less than 0.6. Therefore, the algorithm can provide higher data availability for large-scale multidimensional datasets.

Figure 15

Relative error rate under different privacy budgets and dataset sizes (A Count query; B Sum query).
