Prosecution Insights
Last updated: April 19, 2026
Application No. 17/945,004

METHOD, COMPUTING DEVICE AND COMPUTER-READABLE MEDIUM FOR CLASSIFICATION OF ENCRYPTED DATA USING DEEP LEARNING MODEL

Status: Final Rejection (§103)
Filed: Sep 14, 2022
Examiner: KOWALIK, SKIELER ALEXANDER
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Daegu Gyeongbuk Institute Of Science And Technology
OA Round: 2 (Final)
Grant Probability: 22% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 22% (2 granted / 9 resolved; -32.8% vs TC avg)
Interview Lift: +87.5% across resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 25
Total Applications: 34 (career history, across all art units)

Statute-Specific Performance

§101: 41.0% (+1.0% vs TC avg)
§103: 47.2% (+7.2% vs TC avg)
§102: 4.5% (-35.5% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)

Deltas are vs the Tech Center average estimate • Based on career data from 9 resolved cases
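For readers who want to sanity-check the dashboard, the headline examiner numbers can be rederived from the raw counts it reports (2 allowances out of 9 resolved cases). A minimal sketch in Python; the implied Tech Center average is back-solved from the stated -32.8% delta and is an assumption, not a figure taken from the report:

```python
# Recompute the examiner statistics from the raw counts reported above.
granted, resolved = 2, 9
career_allow_rate = granted / resolved             # 0.2222... -> shown as "22%"

delta_vs_tc = -0.328                               # stated in the report
implied_tc_avg = career_allow_rate - delta_vs_tc   # back-solved, ~55% (assumption)

print(f"career allow rate: {career_allow_rate:.1%}")
print(f"implied TC average: {implied_tc_avg:.1%}")
```

The same arithmetic explains why the "Strong +88% interview lift" chart caption and the "+87.5% Interview Lift" figure are the same statistic at different rounding.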

Office Action

§103
DETAILED ACTION

Claims 1, 4-8 are presented for examination. This office action is in response to the submission of the application on 9-SEPTEMBER-2022.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed on 17-OCTOBER-2025 in response to the non-final office action mailed 8-SEPTEMBER-2025 has been entered. Claims 1, 4-8 remain pending in the application. With regard to the §101 rejection, the rejection of claim 1 has been overcome by the applicant's amendments. The applicant's amendments and arguments set forth have been found persuasive and sufficiently overcome the §101 rejection. With regard to the §103 rejections, the applicant's amendments to the claims have not overcome the rejections of claims 1, 4-8, as newly found prior art FENG sufficiently teaches the newly added limitations of the amended claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. 
Patentability shall not be negated by the manner in which the invention was made. Claims 1 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over ROWELL (U.S. Pub. No. US 20190158813 A1) in view of JAVIDI (U.S. Pub. No. US 12287892 B2) in view of FENG (U.S. Pub. No. US 11909563 B2) Regarding claim 1, ROWELL substantially teaches the claim including: A method for classifying encrypted data using a deep learning model executed in a computing device including at least one processor and at least one memory, the method comprising: a data augmentation step of generating a plurality of modified data with respect to a corresponding original data by modifying the plurality of original data among a plurality of original data corresponding to unstructured image data; ([0150] Each shift function operates on one or more coordinates of position data to shift color data in defined pixel increments. Depending on the position data coordinates modified by the shift function, image data included in the image frames is manipulated in a variety of ways including horizontal shifts, vertical shifts, rotational shifts, and scalar shifts. (image data is unstructured.)) While ROWELL teaches modifying unstructured data, it does not explicitly teach: a data encryption step of encrypting the corresponding original data and each of the plurality of modified data generated through the data augmentation step wherein a plurality of random phase masks are used for encrypting one image data by a double random phase encoding (DRPE) method among optical-based encryption methods; However, in analogous art that similarly uses image data, JAVIDI teaches: a data encryption step of encrypting the corresponding original data and each of the plurality of modified data generated through the data augmentation step wherein a plurality of random phase masks are used for encrypting one image data by a double random phase encoding (DRPE) method among optical-based encryption methods; ((Col. 
5, lines 12-19) One solution of the disclosure relates to encrypting the input to the system, wherein the encryption is performed in the optical domain, prior to digitization. This solution may remedy the imbalance between the protector and attacker by involving different tools in the protector's operation, and due to the specific nature of the optical encryption, making it practically impossible for an attacker to break the encryption, as further detailed below. (Col. 15, lines 13-42) First, the data used for training is optically encrypted prior to the training process, which makes the machine learning algorithm robust, since it cannot be attacked without access to the optical hardware representing the key. Second, the combination of diverse modalities, provided by the optical hardware encryption and digital model, introduces robustness by making the system more complex and less accessible for the attacker. This is generally considered safer than post-acquisition software encryption, which is more vulnerable to, for example, to computerized brute-force or KPA attacks, as the hacking process that must be executed when employing optical encryption or software-based optical encryption is much more arduous and time consuming. Furthermore, the proposed encryption introduces asymmetry between the defending and attacking tools because the defender has designed the optical encryption (e.g., optical hardware), whereas in a brute force hacking scenario, the attacker would have to gain physical access to the optical hardware and interrogate, sabotage, or replace it. Third, the optical encryption may allow for a very large combination of different optical parameter values, resulting in a correspondingly complex encryption key, which would be extremely difficult, if not impossible to reverse-engineer. 
Further, it is possible to use optical encryption techniques which prevent the formulation of a differentiable mathematical model for the overall encryption and model process, such as photon counting DRPE, thereby disabling attack approaches that are essentially based on differentiable models. ) and a data classification step of labeling the encrypted data with any one of a plurality of classification items for classifying the encrypted data without the process of decrypting the encrypted data by inputting each of the data, which is encrypted through the data encryption step, into a deep learning-based inference model. ((col 5, lines 62-65) Optical encryption may be associated with a key, that describes the encryption, including for example the encryption type such as adding diffraction, and the encryption details, such as the diffraction parameter values. (col 12, lines 35-49) In block 608, the set of optically encrypted image data may be provided to a trained machine learning model, such as DNN, for example machine learning model 308. The trained machine learning model may have been trained to perform the required task, and provide a prediction, e.g., classification, detecting a region in the image, segmentation, deblurring, or the like. Said training may be associated with the same encryption key used for encrypting the image-bearing light. In a non-limiting example, machine learning model 308 may be trained upon a plurality of pairs of training image datasets and labels. Each such pair thus comprises an encrypted training image data, generated by encrypting an image data with the same encryption key as mentioned with respect to block 604 above, and a corresponding label. (as taught here, the model uses unseen data to make a prediction, thus it is an inference model. 
Further, the DNN is trained to apply classification upon encrypted data in the form of a key, consistent with the DRPE method similarly claimed.)) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined JAVIDI's inference model and encryption with ROWELL's data augmentation, with a reasonable expectation of success, yielding a method for inputting encrypted data into an inference model, as in JAVIDI, where the data has been augmented, as found in ROWELL. A person of ordinary skill would have been motivated to improve security (JAVIDI, Col. 1, lines 38-44). ROWELL further teaches wherein the data augmentation step includes: a data transformation step of modifying the corresponding original data by flipping and/or shifting an image of the corresponding original data to generate the plurality of modified data, ([0150] Each shift function operates on one or more coordinates of position data to shift color data in defined pixel increments. Depending on the position data coordinates modified by the shift function, image data included in the image frames is manipulated in a variety of ways including horizontal shifts, vertical shifts, rotational shifts, and scalar shifts.) JAVIDI further teaches: and a mask transformation step of, with respect to the plurality of random phase masks corresponding to the original data, generating a plurality of modified random phase masks corresponding to each of the plurality of modified data by modifying each of the plurality of random phase masks in a same manner as the corresponding original data is modified to each of the plurality of modified data in the data transformation step, ((Col 14, lines 56-65) In some embodiments, the schemes, methods, and/or processes above may be used for secure microscopic imaging, for example as used in biomedical imaging including, for example, tissue analysis (e.g., malignant or non-malignant). 
Cyber physical security may be crucial due to the privacy and sensitivity in this field is, and the vulnerability to adversarial attacks which may be a matter of life and death. In such uses, DRPE or Single Random Phase Encoding (SRPE) scheme may be used to encrypt a microscopy image, for example (DRPE and SRPE both utilize mask transformations)) wherein the data encryption step includes: encrypting each of the plurality of modified data generated through the data transformation step, with the plurality of modified random phase masks corresponding to each of the plurality of modified data; ((Col 14, lines 56-65) In some embodiments, the schemes, methods, and/or processes above may be used for secure microscopic imaging, for example as used in biomedical imaging including, for example, tissue analysis (e.g., malignant or non-malignant). Cyber physical security may be crucial due to the privacy and sensitivity in this field is, and the vulnerability to adversarial attacks which may be a matter of life and death. In such uses, DRPE or Single Random Phase Encoding (SRPE) scheme may be used to encrypt a microscopy image, for example) While JAVIDI does teach using phase masks, it does not explicitly teach: and dividing the modified encrypted original data into a real part and an imaginary part and wherein, in the data classification step, the divided real part and imaginary part are combined as one data and input into the inference mode to label each of the plurality of encrypted data without the process of decrypting the encrypted data. However, in analogous art that similarly alters image data, FENG teaches: and dividing the modified encrypted original data into a real part and an imaginary part and wherein, in the data classification step, the divided real part and imaginary part are combined as one data and input into the inference mode to label each of the plurality of encrypted data without the process of decrypting the encrypted data. 
((col 2, lines 3-34) In a first aspect, an embodiment of the present application provides a method for modulation recognition of signals based on cyclic residual network, comprising: obtaining a signal matrix of a to-be-recognized signal, and extracting real part information and imaginary part information of the signal matrix; wherein, the to-be-recognized signal is a signal whose modulation is to be recognized; generating, according to extracted real part information and imaginary part information, a real-and-imaginary-part feature matrix of the to-be-recognized signal; converting, according to a preset matrix conversion method, the real-and-imaginary-part feature matrix into an amplitude-and-phase feature matrix; the amplitude-and-phase feature matrix carries amplitude features and phase features of the to-be-recognized signal, and the amount of information of features carried by the amplitude-and-phase feature matrix varies with the amount of information carried by the to-be-recognized signal; and inputting the amplitude-and-phase feature matrix into a pre-trained cyclic residual network to obtain a modulation mode corresponding to the to-be-recognized signal; wherein the cyclic residual network is obtained by training according to a preset number of sample feature data items of the to-be-recognized signal and a classification label for the sample feature data items;) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined FENG's data division with the data encryption of ROWELL, as modified by JAVIDI, with a reasonable expectation of success, yielding a method for dividing data into real and imaginary parts, as in FENG, where the data has been encrypted, as found in ROWELL, as modified by JAVIDI. A person of ordinary skill would have been motivated to improve efficiency (FENG, Col. 1, lines 43-60). 
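For orientation, the claim 1 pipeline the examiner maps above (flip the image, transform the random phase masks the same way, DRPE-encrypt, then split the complex ciphertext into real and imaginary channels for the model) can be sketched with textbook double random phase encoding. This is an illustrative sketch only: the array sizes, mask generation, and NumPy FFT formulation are assumptions, not code from the application or from any cited reference.

```python
import numpy as np

rng = np.random.default_rng(0)

def drpe_encrypt(img, mask1, mask2):
    """Textbook DRPE: one random phase mask in the spatial domain,
    a second in the Fourier domain; output is complex-valued."""
    x = img * np.exp(2j * np.pi * mask1)             # spatial-domain phase mask
    X = np.fft.fft2(x) * np.exp(2j * np.pi * mask2)  # Fourier-domain phase mask
    return np.fft.ifft2(X)                           # complex ciphertext

# Toy "image" and two uniform random phase masks (the DRPE key).
img = rng.random((8, 8))
m1, m2 = rng.random((8, 8)), rng.random((8, 8))

# Augmentation: flip the image AND flip the masks the same way
# (the "mask transformation step" for each modified datum).
img_flipped = np.flip(img, axis=1)
m1_f, m2_f = np.flip(m1, axis=1), np.flip(m2, axis=1)

enc = drpe_encrypt(img_flipped, m1_f, m2_f)

# Split the complex ciphertext into real and imaginary parts and stack
# them as one two-channel array, the input format the claims describe
# for the inference model (no decryption step).
features = np.stack([enc.real, enc.imag], axis=0)
print(features.shape)  # (2, 8, 8)
```

The classifier then consumes `features` directly, which is the "classifying without decrypting" property the claims emphasize: decryption would require both masks, but inference does not.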
Regarding claims 7-8, they comprise limitations similar to those of claim 1 and are therefore rejected for a similar rationale.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over ROWELL (U.S. Pub. No. US 20190158813 A1), JAVIDI (U.S. Pub. No. US 12287892 B2), and FENG (U.S. Pub. No. US 11909563 B2) in further view of KRISHEVSKY (U.S. Pub. No. US 9563840 B2) in further view of FAN (U.S. Pub. No. US 20200138360 A1).

Regarding claim 4, while ROWELL, as modified by JAVIDI, does teach claim 1, which claim 4 is dependent upon, it does not explicitly teach: The method of claim 1, wherein the data classification step includes: a first processing step of deriving a feature value of the encrypted data by repeatedly performing processes of inputting the encrypted data into the inference model and calculating through two convolutional layers and one max-pooling layer included in the inference model by N times (N is a natural number equal to or greater than 1); However, in analogous art that similarly processes image input, KRISHEVSKY teaches: The method of claim 1, wherein the data classification step includes: a first processing step of deriving a feature value of the encrypted data by repeatedly performing processes of inputting the encrypted data into the inference model and calculating through two convolutional layers and one max-pooling layer included in the inference model by N times (N is a natural number equal to or greater than 1); a second processing step of deriving a vector value corresponding to the number of the plurality of classification items by repeatedly performing a process of calculating the feature value of the encrypted data through a fully-connected layer included in the inference model by M times (M is a natural number equal to or greater than 1); ((KRISHEVSKY claim 1) A convolutional neural network system implemented by one or more computers, wherein the convolutional neural network system is configured to receive an input image and 
to generate a classification for the input image, and wherein the convolutional neural network system comprises: a sequence of neural network layers, wherein the sequence of neural network layers comprises: a first convolutional layer configured to receive a first convolutional layer input derived from the input image and to process the first convolutional layer input to generate a first convolved output; a first max-pooling layer immediately after the first convolutional layer in the sequence configured to pool the first convolved output to generate a first pooled output; a second convolutional layer immediately after the max-pooling layer in the sequence configured to receive the first pooled output and to process the first pooled output to generate a second convolved output, and a plurality of fully-connected layers after the second convolutional layer in the sequence configured to receive an output derived from the second convolved output and to collectively process the output derived from the second convolved output to generate a sequence output for the input image.) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined KRISHEVSKY's feature value derivation with the data encryption of ROWELL, as modified by JAVIDI, with a reasonable expectation of success, yielding a method for deriving feature values, as in KRISHEVSKY, where the data has been encrypted, as found in ROWELL, as modified by JAVIDI. A person of ordinary skill would have been motivated to lower cost (KRISHEVSKY, Col 1, lines 24-32). While KRISHEVSKY does teach deriving a feature value using convolutional layers and a max-pooling layer, it does not explicitly teach: and a third processing step of classifying the encrypted data as any one of the plurality of classification items by applying a softmax function to the vector value. 
However, in analogous art that similarly uses CNNs, FAN teaches: and a third processing step of classifying the encrypted data as any one of the plurality of classification items by applying a softmax function to the vector value. ([0254] The softmax layer can receive the output of the final convolutional layer of convolutional stage 10 and produce class probabilities for each [x,y] pixel location of the input images. The softmax layer can apply the softmax function, which is a normalized exponential function that "squashes" a K-dimensional vector of arbitrary real values to a K-dimensional vector of real values in the range of (0,1) that add up to 1. The output of the softmax layer can be a matrix of classification scores for each pixel location or an N-channel image of probabilities, where N is the number of classification classes. In some embodiments, a pixel can be assigned to a class corresponding to the maximum probability at the pixel.) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined FAN's softmax function with the convolutional layer calculations of ROWELL, as modified by JAVIDI and KRISHEVSKY, with a reasonable expectation of success, yielding a method for classifying through a softmax, as in FAN, where the data has been derived from a CNN, as found in ROWELL, as modified by JAVIDI and KRISHEVSKY. A person of ordinary skill would have been motivated to improve identification of the model (FAN [0011]).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over ROWELL (U.S. Pub. No. US 20190158813 A1), JAVIDI (U.S. Pub. No. US 12287892 B2), and FENG (U.S. Pub. No. US 11909563 B2) in further view of KRISHEVSKY (U.S. Pub. No. US 9563840 B2) in further view of ROAKE (U.S. Pub. No. US 20190294819 A1) in further view of MOTOKI (U.S. Pub. No. 
US 20200104708 A1).

Regarding claim 5, while ROWELL, as modified by JAVIDI, does teach claim 1, which claim 5 is dependent upon, it does not explicitly teach: The method of claim 1, wherein the data classification step includes: a first processing step of deriving a feature value of the encrypted data by repeatedly performing processes of inputting the encrypted data into the inference model and calculating through two convolutional layers and one max-pooling layer included in the inference model by N times (N is a natural number equal to or greater than 1); However, in analogous art that similarly processes image input, KRISHEVSKY teaches: The method of claim 1, wherein the data classification step includes: a first processing step of deriving a feature value of the encrypted data by repeatedly performing processes of inputting the encrypted data into the inference model and calculating through two convolutional layers and one max-pooling layer included in the inference model by N times (N is a natural number equal to or greater than 1); ((KRISHEVSKY claim 1) A convolutional neural network system implemented by one or more computers, wherein the convolutional neural network system is configured to receive an input image and to generate a classification for the input image, and wherein the convolutional neural network system comprises: a sequence of neural network layers, wherein the sequence of neural network layers comprises: a first convolutional layer configured to receive a first convolutional layer input derived from the input image and to process the first convolutional layer input to generate a first convolved output; a first max-pooling layer immediately after the first convolutional layer in the sequence configured to pool the first convolved output to generate a first pooled output;) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined with KRISHEVSKY's feature value derivation and, 
with ROWELL‘s, as modified by JAVIDI, data encryption, with a reasonable expectation of success, a method for deriving feature values, as in KRISHEVSKY, where the data has been encrypted, as found in ROWELL, as modified by JAVIDI. A person of ordinary skill would have been motivated to lower cost (KRISHEVSKY Col 1, lines 24-32). While KRISHEVSKY does teach deriving a feature value with a CNN, it does not explicitly teach: a second processing step of deriving output date having a size identical to a size of the encrypted date However, in analogous art that similarly uses encryption, ROAKE teaches: a second processing step of deriving output date having a size identical to a size of the encrypted date ([0028] As a more specific example, the plaintext values may be, for example, dates in the format of MM/DD/YYYY (where “M” represents a month digit, “D” represents a day digit and “Y” represents a year digit). For example, a set of dates may be soft limited to the range of Jan. 1, 1900 to Dec. 31, 2010, and a variance range of plus or minus five years may be imposed, resulting in a hard limit of Jan. 1, 1895 to Dec. 31, 2015. The pseudonymization process involves encrypting the plaintext dates to derive base dates, and varying the base dates based on the encrypted ancillary data. [0029] For example, the date of “Dec. 3, 1965” may be encrypted, using an FPE cipher, (when using a FPE cipher, the size of the data remains the same)) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined with ROAKE‘s date data and, with ROWELL‘s, as modified by JAVIDI and KRISHEVSKY, data encryption, with a reasonable expectation of success, date data, as in ROAKE, where the data has been encrypted, as found in ROWELL, as modified by JAVIDI and KRISHEVSKY. A person of ordinary skill would have been motivated to improve data security (ROAKE [0013]). 
While ROWELL, as modified by JAVIDI, KRISHEVSKY, and ROAKE, does teach deriving feature data and date data, it does not explicitly teach: by repeatedly performing a process of calculating the feature value of the encrypted data through one de-convolutional layer and two convolutional layers included in the inference model by K times (K is a natural number equal to or greater than 1); However, in analogous art that similarly performs encryption, MOTOKI teaches: by repeatedly performing a process of calculating the feature value of the encrypted data through one de-convolutional layer and two convolutional layers included in the inference model by K times (K is a natural number equal to or greater than 1); ([0096] In FIG. 7, a dotted line 721 indicates predicted values (predicted values of I0.sub.n-x to I0.sub.n) of the autoregression model calculated for (x+1) data pieces in the 0-th data set. [0150] Also, an inference apparatus according to the first embodiment includes a trained model including an encoder unit having multiple convolutional layers and a decoder unit having multiple corresponding deconvolutional layers. The trained model further includes an autoregression module. The autoregression module calculates a feature indicative of a dependency of data in a predetermined direction for a data set outputted from a N-th convolutional layer (N is a positive integer) in the encoder unit and inputs the calculated feature to a N-th deconvolutional layer in the decoder unit.) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined with MOTOKI‘s data derivation and, with ROWELL‘s, as modified by JAVIDI, KRISHEVSKY, and ROAKE, date data, with a reasonable expectation of success, data derivation using a de-convolutional layer and convolutional layers, as in MOTOKI, which is used to derive an output date, as found in ROWELL, as modified by JAVIDI, KRISHEVSKY, and ROAKE. 
A person of ordinary skill would have been motivated to improve accuracy (MOTOKI [0005]). While ROWELL, as modified by JAVIDI, KRISHEVSKY, ROAKE, and MOTOKI, does teach deriving feature data, it does not explicitly teach: and a third processing step of deriving restored data for the encrypted data by applying a sigmoid function to the output data, and classifying the encrypted data as any one of the plurality of classification items based on the restored data. However, in analogous art that similarly encrypts data, WANG teaches: and a third processing step of deriving restored data for the encrypted data by applying a sigmoid function to the output data, and classifying the encrypted data as any one of the plurality of classification items based on the restored data. ((Col 1, lines 31-45) In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include the actions of receiving, at a plurality of secure computation nodes (SCNs), a plurality of random numbers from a random number provider; encrypting, at each SCN, data stored at the SCN using the received random numbers; iteratively updating a secure logistic regression model (SLRM) by using the encrypted data from each SCN; and after iteratively updating the SLRM, outputting a result of the SLRM, wherein the result is configured to enable a service to be performed by each SCN. (Col 4, lines 4-16) Implementations of this disclosure introduce a new approach of training SLRM by using SS and an even-driven interactive secure modeling procedure. The described implementations apply an SLRM model that based on logistic regression and can be iteratively updated by feeding training data received from both parties. Logistic regression is a generalized linear regression and is one type of classification and prediction algorithms. 
The logistic regression algorithm estimates discrete values from a series known dependent variables and estimates the probability of the occurrence of an event by fitting the data into a logic function. Logistic regression is mainly used for classification, such as spam email classification, credit risk prediction classification, etc. (sigmoid functions are used in regression algorithms)) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined WANG's sigmoid and classification with the derived data of ROWELL, as modified by JAVIDI, KRISHEVSKY, ROAKE, and MOTOKI, with a reasonable expectation of success, yielding classification of data using a sigmoid function, as in WANG, where the data is derived from an encrypted date, as found in ROWELL, as modified by JAVIDI, KRISHEVSKY, ROAKE, and MOTOKI. A person of ordinary skill would have been motivated to improve accuracy (MOTOKI [0005]).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over ROWELL (U.S. Pub. No. US 20190158813 A1), JAVIDI (U.S. Pub. No. US 12287892 B2) and FENG (U.S. Pub. No. US 11909563 B2) in further view of KRISHEVSKY (U.S. Pub. No. US 9563840 B2) in further view of YANG (U.S. Pub. No. 
US 20220164576 A1).

Regarding claim 6, while ROWELL, as modified by JAVIDI, does teach claim 1, which claim 6 is dependent upon, it does not explicitly teach: The method of claim 1, wherein the data classification step includes: a first processing step of deriving a first feature value of the encrypted data by performing processes of inputting the encrypted data into the inference model and calculating through a first convolutional layer and a max-pooling layer included in the inference model; However, in analogous art that similarly encrypts data, KRISHEVSKY teaches: The method of claim 1, wherein the data classification step includes: a first processing step of deriving a first feature value of the encrypted data by performing processes of inputting the encrypted data into the inference model and calculating through a first convolutional layer and a max-pooling layer included in the inference model; and a third processing step of classifying the encrypted data as any one of the plurality of classification items by performing a process of calculating the second feature value through an average-pooling layer and a fully-connected layer included in the inference model ((KRISHEVSKY claim 1) A convolutional neural network system implemented by one or more computers, wherein the convolutional neural network system is configured to receive an input image and to generate a classification for the input image, and wherein the convolutional neural network system comprises: a sequence of neural network layers, wherein the sequence of neural network layers comprises: a first convolutional layer configured to receive a first convolutional layer input derived from the input image and to process the first convolutional layer input to generate a first convolved output; a first max-pooling layer immediately after the first convolutional layer in the sequence configured to pool the first convolved output to generate a first pooled output; a second convolutional layer immediately 
after the max-pooling layer in the sequence configured to receive the first pooled output and to process the first pooled output to generate a second convolved output, and a plurality of fully-connected layers after the second convolutional layer in the sequence configured to receive an output derived from the second convolved output and to collectively process the output derived from the second convolved output to generate a sequence output for the input image.) It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined with KRISHEVSKY‘s feature value derivation and, with ROWELL‘s, as modified by JAVIDI, data encryption, with a reasonable expectation of success, a method for deriving feature values, as in KRISHEVSKY, where the data has been encrypted, as found in ROWELL, as modified by JAVIDI. A person of ordinary skill would have been motivated to lower cost (KRISHEVSKY Col 1, lines 24-32). While ROWELL, as modified by JAVIDI and KRISHEVSKY, does teach deriving feature data from encrypted data by using convolutional, pooling, and fully connected layers, it does not explicitly teach: a second processing step of deriving a second feature value based on output value finally derived from a last block module by repeating processes of inputting the first feature value into a first block module among a plurality of block modules composed of two second convolutional layers included in the inference model, and inputting an output value derived from the first block module into a second block module; However, in analogous art that similarly teaches using image data, YANG teaches: a second processing step of deriving a second feature value based on output value finally derived from a last block module by repeating processes of inputting the first feature value into a first block module among a plurality of block modules composed of two second convolutional layers included in the inference model, and inputting 
an output value derived from the first block module into a second block module; ([0029] In addition, it is worth noting that the global identification module 121, the local identification module 122, and the component identification module 123 may be pre-trained to realize their image identification and analysis functions. In this regard, the user may capture multiple reference surgical instrument images (for example, 100) in advance, and input the multiple reference surgical instrument images having a BBOX list composed of all target marking boxes in the images into the above-mentioned convolutional neural network calculation model or fast convolutional neural network calculation model of the global identification module 121, the local identification module 122, and the component recognition module 123 for training and inference, to enable the convolutional neural network calculation model or fast convolutional neural network calculation model of the global identification module 121, the local identification module 122 and component identification module 123 to correspondingly output multiple identification results. Then, after data conversion (such as converting the BBOX list into a user-editable Pascal VOC XML format), the user may amend the BBOX list of each of the reference surgical instrument images in the multiple identifications results, and then re-input the multiple reference surgical instrument images having the amended BBOX list to the above-mentioned convolutional neural network calculation model or fast convolutional neural network calculation model of the global identification module 121, the local identification module 122, and the component identification module 123 for training. 
In this regard, the above-mentioned convolutional neural network calculation model or fast convolutional neural network calculation model of the global identification module 121, the local identification module 122, and the component identification module 123 may be, for example, trained through multiple cycles and continuously adding multiple reference surgical instrument images (such as, adding 1000 pieces) to enable the above-mentioned global identification module 121, the local identification module 122 and the component identification module 123 to accurately and effectively identify the global image features, the local image features and the component image features. (it should be noted that modules can be considered a type of 'block'))

It would have been obvious to a person skilled in the art before the effective filing date of the invention to have combined YANG's block modules with ROWELL's data encryption, as modified by JAVIDI and KRISHEVSKY, with a reasonable expectation of success, yielding block modules used for deriving data, as in YANG, where the data has been encrypted, as found in ROWELL as modified by JAVIDI and KRISHEVSKY. A person of ordinary skill would have been motivated to improve accuracy (YANG [0002]).

Response to Arguments

Applicant's arguments filed 17-OCTOBER-2025 have been fully considered, but they are found to be non-persuasive.

With regards to the applicant's remarks regarding the 103 rejection in the non-final action, the applicant argues that the prior art does not teach the newly amended claims 1, 7, and 8. The examiner acknowledges this argument and has adjusted the prior art of ROWELL and JAVIDI to disclose the newly added limitations while adding new prior art FENG. Further, the examiner has adjusted all dependent claims accordingly. Additionally, in regards to mappings that have not been changed, the applicant further argues:

(1) ROWELL fails to disclose or suggest any of the above Features 1-6.
Specifically, ROWELL merely discloses conventional data augmentation such as pixel shifting or image rotation, and does not disclose or suggest "generating a plurality of modified data with respect to a corresponding original data" as recited in Feature 1. In addition, as the Examiner has also acknowledged, ROWELL does not disclose or suggest Features 2-6.

With regards to this argument: while ROWELL does disclose conventional data augmentation, it does indeed teach generating modified data. By augmenting input/original data and performing a shift or transformation on the data to create new data that has been changed through the transformation, ROWELL does in fact generate modified data with respect to the original image data. Further, ROWELL need not disclose Features 2-6, as it was not mapped to those features.

(2) JAVIDI also fails to disclose or suggest the above Features 1 and 3-6. Specifically, JAVIDI does not disclose "generating a plurality of modified data with respect to a corresponding original data" as recited in Feature 1. In addition, while JAVIDI describes encryption and classification using a single random phase-mask pair, it never teaches or suggests "using a plurality of masks that are synchronously modified in the same manner as the corresponding plurality of modified data" as recited in Features 3 and 4. This synchronized data-mask transformation is a key aspect of the present invention and is entirely absent from JAVIDI. In addition, as also recognized by the Examiner, JAVIDI does not disclose Features 5 and 6.

With regards to this argument, the examiner agrees that the mapping provided from JAVIDI did not disclose the newly amended limitations and has adjusted JAVIDI's mappings to teach the newly added limitations argued here.
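Editor's note (not part of the record): the "synchronously modified" data-mask pairing the applicant emphasizes can be illustrated with a short sketch in which the identical transform (here, a circular pixel shift) is applied to both the original data and its random phase mask. All names below are hypothetical; this is a sketch of the general idea, not the applicant's or any cited reference's implementation.

```python
import numpy as np

def synchronized_augment(data: np.ndarray, mask: np.ndarray, shift: int):
    """Apply the SAME modification (a circular pixel shift) to an original
    data array and to its associated phase mask, illustrating the
    'synchronously modified' data/mask pairs described in Features 3-4.
    Hypothetical sketch only."""
    modified_data = np.roll(data, shift, axis=0)
    modified_mask = np.roll(mask, shift, axis=0)
    return modified_data, modified_mask

# Generate a plurality of modified (data, mask) pairs from one original pair.
data = np.arange(16, dtype=float).reshape(4, 4)
mask = np.exp(2j * np.pi * np.random.default_rng(0).random((4, 4)))  # random phase mask
pairs = [synchronized_augment(data, mask, s) for s in range(1, 4)]
```

The point of the sketch is only that the data and its mask undergo one and the same transformation, so each modified datum stays paired with a correspondingly modified mask.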
(3) The Examiner asserts that WANG '142 (US 2024/0056142 A1) discloses the limitation in original Claim 3 concerning "dividing encrypted data into a real part and an imaginary part." However, WANG '142 does not disclose the complete structure of Features 5 and 6, which include "dividing each of the plurality of encrypted data into a real part and an imaginary part and combining the divided real and imaginary parts as one data to be input into the inference model without any decryption process." Furthermore, Features 5 and 6 derive their meaning and technical significance only in conjunction with the above Features 1-4, which establish the synchronized relationship between data and masks. Since WANG '142 fails to disclose or even contemplate Features 1 through 4, one of ordinary skill in the art would not be motivated, without impermissible hindsight, to derive Features 5 and 6 from WANG '142. Therefore, it would not have been obvious for a person having ordinary skill in the art to conceive amended Claim 1 based on the cited references, which fail to disclose or suggest all of Features 1-6 before the effective filing date of amended Claim 1.

With regards to this argument, the examiner acknowledges the deficiencies of WANG in view of the newly added amendments and arguments presented and has replaced the prior art WANG with new prior art FENG.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SKIELER A KOWALIK, whose telephone number is (571) 272-1850. The examiner can normally be reached 8-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela D. Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SKIELER ALEXANDER KOWALIK/
Examiner, Art Unit 2142

/Mariela Reyes/
Supervisory Patent Examiner, Art Unit 2142
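Editor's note (not part of the record): the Claim 3 limitation disputed above, dividing complex-valued encrypted data into a real part and an imaginary part and combining the parts "as one data" for the inference model, can be illustrated with a minimal sketch. Names are hypothetical, and two-channel stacking is only one plausible reading of "combining ... as one data".

```python
import numpy as np

def split_and_stack(encrypted: np.ndarray) -> np.ndarray:
    """Divide complex-valued encrypted data into its real part and
    imaginary part, then combine the two parts as one 2-channel array
    usable as a single model input. No decryption is performed.
    Hypothetical illustration only."""
    return np.stack([encrypted.real, encrypted.imag], axis=0)

# Example: a 2x2 block of complex "ciphertext" values.
x = np.array([[1 + 2j, 3 - 1j], [0 + 1j, 2 + 0j]])
model_input = split_and_stack(x)  # shape (2, 2, 2): channel 0 = real, channel 1 = imaginary
```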

Prosecution Timeline

Sep 14, 2022
Application Filed
Sep 01, 2025
Non-Final Rejection — §103
Oct 17, 2025
Response Filed
Mar 19, 2026
Final Rejection — §103 (current)

Prosecution Projections

3-4
Expected OA Rounds
22%
Grant Probability
99%
With Interview (+87.5%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
