Prosecution Insights
Last updated: April 19, 2026
Application No. 17/831,177

PERFORMING INFERENCE USING SIMPLIFIED REPRESENTATIONS OF CONVOLUTIONAL NEURAL NETWORKS

Current status: Non-Final Office Action (§103)
Filed: Jun 02, 2022
Examiner: GEBRESLASSIE, WINTA
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: VIANAI SYSTEMS, INC.
OA Round: 3 (Non-Final)

Outlook: Favorable
Grant probability: 76%
Expected OA rounds: 3-4
Estimated time to grant: 2y 5m
Grant probability with interview: 99%
Examiner Intelligence

Career allow rate: 76% (101 granted / 133 resolved), above average (+13.9% vs TC avg)
Interview lift: +24.7% (strong), measured over resolved cases with an interview
Typical timeline: 2y 5m average prosecution; 53 applications currently pending
Career history: 186 total applications across all art units
Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 66.4% (+26.4% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 133 resolved cases.
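As a sanity check on the headline figures above, the career allow rate can be recomputed from the raw counts. A minimal sketch; note the Tech Center average (~62%) is derived here from the stated +13.9% delta, not given directly in the data:

```python
# Recompute the examiner's career allow rate from the raw counts above.
granted = 101
resolved = 133

allow_rate = granted / resolved  # fraction of resolved cases that granted
print(f"Career allow rate: {allow_rate:.1%}")  # ~75.9%, displayed as 76%

# The dashboard reports +13.9% vs the Tech Center average, which implies
# a TC average of roughly 75.9% - 13.9% = 62.0% (derived, not stated).
tc_avg = allow_rate - 0.139
print(f"Implied TC average: {tc_avg:.1%}")
```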

Office Action

§103

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/17/2026 has been entered.

Response to Arguments

Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-4, 7, 10-14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chandrashekar et al. (NPL, “CRL: Class Representative Learning for Image Classification”) in view of Saptharishi et al. (US 20120274777 A1).

Regarding claim 1, Chandrashekar et al. teach a computer-implemented method for performing inference operations associated with a trained machine learning model (see Fig. 2, step 2 disclose inference operation associated with CNN, page 4, col. 1 2nd para; “As shown in Figure 2, the CRL model is composed of two primary components such as CR Generation and CR-based Inferencing”), the method comprising: storing a plurality of image representations that are associated with a plurality of output classes predicted by the trained machine learning model, wherein image representations of the plurality of image representations represent image sets for output classes of the plurality of output classes (see page 1, Abstract “first, the learning step is to build class representatives to represent classes… Second, the inferencing step in CRL is to match between the class representatives and new data”, see also page 6, section E; “The CR-based inferencing is a mapping between the input and Class Representatives (CRs) and label it with a class” Note: CRL necessarily maintains the learned CRs for later inference (“matching…against the available CRs”), i.e., they are available to the inference engine as stored/class-maintained representations); performing, by an inference engine, a first comparison that compares a first input image to the plurality of image representations (see Abstract; “Second, the inferencing step in CRL is to match between the class representatives and new data”, see also page 6, right col. section E.
last para; “The cosine similarity between the new input (NI) and Class Representatives for class c (CR(c)), where c ∈ C can be computed…the label for the new input from CRL Model cˆ is predicted by selecting the class from all classes C that has the highest cosine similarity to the new input. The CR-based inferencing is a mapping between the input and Class Representatives (CRs) and label it with a class” Note; to match or mapping the input and CRs implies comparing a first input image with a plurality of image representations); and generating, by the inference engine, a first prediction that indicates that the first input image is a member of the first output class (see also page 6, right col. last para; “As shown in Equation 8, the label for the new input from CRL Model cˆ is predicted by selecting the class from all classes C that has the highest cosine similarity to the new input. The CRL model will conduct inferencing by matching the new input against the available CRs and label it with a class having the highest cosine similarity score”). Chandrashekar et al. 
does not disclose storing a plurality of alternative image representations that are associated with the plurality of output classes, wherein the alternative image representations are different from the image representations, and wherein alternative image representations of the plurality of alternative image representations represent the image sets for the output classes; in response to determining, by the inference engine, that the first input image does not match any image representation included in the plurality of image representations, including a first image representation for a first output class; subsequently performing, by the inference engine, a second comparison that determines that the first input image does match a first alternative image representation from the plurality of alternative image representations, wherein the first alternative image representation is associated with a first output class, and the first alternative image representation is different from the first image representation.

In the same field of endeavor, Saptharishi et al. teaches storing a plurality of alternative image representations that are associated with the plurality of output classes, wherein the alternative image representations are different from the image representations (see para [0025]; “The first object has a first signature representing features of the first object derived from the images of the first group… The second object is detected in a second image distinct from the images of the first group. The second signature represents features of the second object derived from the second image”, see also para [0049]; “a feature combination supplied to the first stage (stage 1) of FIG.
5A, for example, may be different from or the same as the feature combination supplied to the second stage (stage 2)…..Each stage 500, therefore, has a corresponding feature combination associated with it”, Note: feature combination/representation used in stage 1 correspond to image representation, and feature combination/representation used in stage 2 are corresponding to alternative image representations), and wherein alternative image representations of the plurality of alternative image representations represent the image sets for the output classes (see para [0046]; “the features …may represent feature vectors (e.g., histograms in which the histogram bins correspond to vector components) of the appearance characteristics and may be used by the match classifier 218 to determine whether objects match”, Note: The "representations" are specifically "feature vectors" (like histograms), and "representing image sets" translates to "determining whether objects match" (classification); in response to determining, by the inference engine, that the first input image does not match any image representation included in the plurality of image representations, including a first image representation for a first output class (see para [0053]; “The decision step value is compared (represented by block 506) to one or both of an acceptance threshold .tau..sub.a and a rejection threshold .tau..sub.r to determine whether two objects match, to reject the objects as a match, or to forward the decision to the next step 400”, see also para [0055]; “If the decision step value is less than or equal to the rejection threshold .tau..sub.r, the first and second objects are rejected as a match (step 616)”, and para [0097]; “If none of the classified objects 1202 match the first tracked object, information is generated to indicate a non-match”); subsequently performing, by the inference engine, a second comparison that determines that the first input image does match a first alternative image 
representation from the plurality of alternative image representations (see para [0055]; “If the decision step value is greater than the rejection threshold .tau..sub.r but less than or equal to the acceptance threshold .tau..sub.a, the input z is forwarded to the second step 400 (or, in the alternative, only those feature combinations used by the second step 400 are transmitted to the second step 400) (step 618). The first and second objects may be accepted or rejected as a match at any step 400 within the cascade”), wherein the first alternative image representation is associated with a first output class, and the first alternative image representation is different from the first image representation (see para [0049]; “a feature combination supplied to the first stage (stage 1) of FIG. 5A, for example, may be different from or the same as the feature combination supplied to the second stage (stage 2)…..Each stage 500, therefore, has a corresponding feature combination associated with it”, see also para [0096]; “classifies the objects of the current frame as either a member of one of the object classes (e.g., human, vehicle) or as “unknown”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general use of class representative learning model (CRL), that can be especially effective in image classification of Chandrashekar et al. in view of a method of tracking an object captured by a camera system of Saptharishi et al. in order to improve robustness and avoid premature/erroneous assignments when the first comparison is not sufficiently confident, yielding predictable results (see para [0025]).

Regarding claim 2, the rejection of claim 1 is incorporated herein. Chandrashekar et al. in the combination further teach further comprising: comparing a second input image with the plurality of image representations (see page 13, right col.
last para; “the inferencing can be conducted through matching new inputs with CRs” Note: new inputs implies second input image); determining that the second input image does match a second image representation included in the plurality of image representations (see page 6, section E; “The cosine similarity between the new input (NI) and Class Representatives for class c (CR(c)), where c ∈ C can be computed The CRL Model assigns the new input with the label associated with Class c that has the highest cosine similarity score. The higher cosine similarity score indicates the closeness between the Class Representative CR(c) and the new input (NI) in the Class Representative Feature Space (CRFS)”, see also page 12, left col. 3rd para; “This result confirms that the CRL model can be used to validate the distribution of data in terms of dissimilarities and similarities of CRs” Note: second image representation included in the plurality); and generating a second prediction that indicates that the second input image is a member of a second output class to which the second image representation is mapped within a simplified representation of the trained machine learning model (see step 2 of Fig. 2 disclose simplified representation of the trained machine learning model, page 6, right col. last para; “As shown in Equation 8, the label for the new input from CRL Model cˆ is predicted by selecting the class from all classes C that has the highest cosine similarity to the new input. The CRL model will conduct inferencing by matching the new input against the available CRs and label it with a class having the highest cosine similarity score”, Note: the second image representation CR(c) is mapped to the output class label c in CRL’s class-representative (simplified) model, and generating the second prediction is the label assignment for the second input image).

Regarding claim 3, the rejection of claim 1 is incorporated herein. Chandrashekar et al.
in the combination further teach wherein comparing the first input image with the plurality of image representations comprises computing a plurality of vector similarities between a plurality of pixel values included in the first input image and the plurality of image representations (see page 3 left col. 5th para; “we are using a vector space model with a cosine similarity measurement”, see also page 4, left col. 2nd para; “the Source environment was mainly used as a reference standard for producing a feature vector of the input data in space. Figure 2(a) shows the Source environment (i.e., Pre-trained model), and Figure 2(b) shows the inferencing process with CRs on how a new image is projected on the Source environment and is mapped it on to the CRs for classification”).

Regarding claim 4, the rejection of claim 3 is incorporated herein. Chandrashekar et al. in the combination further teach wherein determining that the first input image does not match any image representation included in the plurality of image representations comprises determining that each vector similarity included in the plurality of vector similarities does not meet a minimum threshold needed for similarity (see page 3 left col. 5th para; “we are using a vector space model with a cosine similarity measurement”, see page 6 right col. section E; “Given a new input is vectorized in the source environment to the Class Representative Feature Space (CRFS), NI = ˆa(x (i) ), as shown in Equation 2. The cosine similarity between the new input (NI) and Class Representatives for class c (CR(c)), where c ∈ C can be computed using Equation 7. The CRL Model assigns the new input with the label associated with Class c that has the highest cosine similarity score.
The higher cosine similarity score indicates the closeness between the Class Representative CR(c) and the new input (NI) in the Class Representative Feature Space (CRFS)”, and page 12, right col., 3rd para; “This result confirms that the CRL model can be used to validate the distribution of data in terms of dissimilarities and similarities of CRs. The classification accuracy can be estimated based on the CR distribution model”).

Regarding claim 7, the rejection of claim 1 is incorporated herein. Chandrashekar et al. in the combination further teach further comprising: comparing a second input image with the plurality of image representations (see page 13, right col. last para; “It was possible because the CRs can be generated in a parallel and distributed manner, and the inferencing can be conducted through matching new inputs with CRs”). Saptharishi et al. in the combination further teach determining that the second input image does not match any image representation included in the plurality of image representations (see para [0048]; “The decision step value s(z) may indicate whether the first and second object match, and may include a value corresponding to a confidence level in its decision”, see also para [0055]; “If the decision step value is less than or equal to the rejection threshold .tau..sub.r, the first and second objects are rejected as a match”), comparing the second input image with the plurality of alternative image representations associated with the plurality of output classes (see para [0055]; “If the decision step value is greater than the rejection threshold .tau..sub.r but less than or equal to the acceptance threshold .tau..sub.a, the input z is forwarded to the second step 400 (or, in the alternative, only those feature combinations used by the second step 400 are transmitted to the second step 400) (step 618).
The first and second objects may be accepted or rejected as a match at any step 400 within the cascade”); determining that the second input image does not match any alternative image representation included in the plurality of alternative image representations (see para [0055]; “The first and second objects may be accepted or rejected as a match at any step 400 within the cascade”); and generating a second prediction that indicates that the second input image is not a member of any output class included in the plurality of output classes (see para [0096]; “Objects are detected in a current frame and the object classification module 210 classifies the objects of the current frame as either a member of one of the object classes (e.g., human, vehicle) or as “unknown”).

Regarding claim 10, the rejection of claim 1 is incorporated herein. Chandrashekar et al. in the combination further teach wherein the trained machine learning model comprises a trained convolutional neural network (see Abstract; “the learning step is to build class representatives to represent classes in datasets by aggregating prominent features extracted from a Convolutional Neural Network (CNN)”).

Regarding claim 11, the scope of claim 11 is fully encompassed by the scope of claim 1, accordingly, the rejection of claim 1 is fully applicable here (see also para [0135]; “they can exist partly or wholly as one or more software programs comprised of program instructions in source code, object code, executable code or other formats. Any of the above can be embodied in compressed or uncompressed form on a computer-readable medium, which include storage devices. Exemplary computer-readable storage devices” of Saptharishi et al.).

Regarding claim 12, the rejection of claim 11 is incorporated herein. Chandrashekar et al.
in the combination further teach wherein the instructions further cause the one or more processors to perform the steps of: comparing a second input image with the plurality of image representations (see page 13, right col. last para; “the inferencing can be conducted through matching new inputs with CRs” Note: new inputs implies second input image); determining that the second input image does match a second image representation included in the plurality of image representations (see page 6, section E; “The cosine similarity between the new input (NI) and Class Representatives for class c (CR(c)), where c ∈ C can be computed The CRL Model assigns the new input with the label associated with Class c that has the highest cosine similarity score. The higher cosine similarity score indicates the closeness between the Class Representative CR(c) and the new input (NI) in the Class Representative Feature Space (CRFS)”, see also page 12, left col. 3rd para; “This result confirms that the CRL model can be used to validate the distribution of data in terms of dissimilarities and similarities of CRs” Note: second image representation included in the plurality); and generating a second prediction that indicates that the second input image is a member of a second output class to which the second image representation is mapped within a simplified representation of the trained machine learning model (see step 2 of Fig. 2 disclose simplified representation of the trained machine learning model, page 6, right col. last para; “As shown in Equation 8, the label for the new input from CRL Model cˆ is predicted by selecting the class from all classes C that has the highest cosine similarity to the new input.
The CRL model will conduct inferencing by matching the new input against the available CRs and label it with a class having the highest cosine similarity score”, Note: the second image representation CR(c) is mapped to the output class label c in CRL’s class-representative (simplified) model, and generating the second prediction is the label assignment for the second input image).

Regarding claim 13, the rejection of claim 12 is incorporated herein. Chandrashekar et al. further teach wherein determining that the second input image does match the second image representation comprises determining that a first similarity between the second input image and the second image representation is higher than a threshold for minimum similarity and a second similarity between the second input image and a second image representation included in the plurality of image representations (see table V, page 6 left col., section E; “The higher cosine similarity score indicates the closeness between the Class Representative CR(c) and the new input (NI) in the Class Representative Feature Space (CRFS). cˆ = argmax c∈C {cos(CR(c), NI)} (8) As shown in Equation 8, the label for the new input from CRL Model cˆ is predicted by selecting the class from all classes C that has the highest cosine similarity to the new input. The CRL model will conduct inferencing by matching the new input against the available CRs and label it with a class having the highest cosine similarity score”, see also page 13 right col., 1st para; “We can further extend it to determine the common and unique features of the CR vectors and find the weights that maximize the uniqueness between CRs”, and page 10, left col. 3rd para; “we evaluate by reducing the dimensions using standard reduction techniques, namely minimum pooling (MinPool), maximum pooling (MaxPool), and average pooling (AvgPool).
The pooling in CBL was implemented on the [8x8,192] feature vector (Layer 10) with the filter size of 2x2 transforming into [4x4,192]”).

Regarding claim 14, the rejection of claim 12 is incorporated herein. Chandrashekar et al. in the combination further teach wherein the instructions further cause the one or more processors to perform the steps of: comparing a second input image with the plurality of image representations (see page 13, right col. last para; “It was possible because the CRs can be generated in a parallel and distributed manner, and the inferencing can be conducted through matching new inputs with CRs”). Saptharishi et al. in the combination further teach determining that the second input image does not match any image representation included in the plurality of image representations (see para [0048]; “The decision step value s(z) may indicate whether the first and second object match, and may include a value corresponding to a confidence level in its decision”, see also para [0055]; “If the decision step value is less than or equal to the rejection threshold .tau..sub.r, the first and second objects are rejected as a match”), comparing the second input image with the plurality of alternative image representations associated with the plurality of output classes (see para [0055]; “If the decision step value is greater than the rejection threshold .tau..sub.r but less than or equal to the acceptance threshold .tau..sub.a, the input z is forwarded to the second step 400 (or, in the alternative, only those feature combinations used by the second step 400 are transmitted to the second step 400) (step 618).
The first and second objects may be accepted or rejected as a match at any step 400 within the cascade”); determining that the second input image does not match any alternative image representation included in the plurality of alternative image representations (see para [0055]; “The first and second objects may be accepted or rejected as a match at any step 400 within the cascade”); and generating a second prediction that indicates that the second input image is not a member of any output class included in the plurality of output classes (see para [0096]; “Objects are detected in a current frame and the object classification module 210 classifies the objects of the current frame as either a member of one of the object classes (e.g., human, vehicle) or as “unknown”).

Regarding claim 20, the scope of claim 20 is fully encompassed by the scope of claim 1, accordingly, the rejection of claim 1 is fully applicable here.

Claims 5-6, 8-9, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Chandrashekar et al. in view of Saptharishi et al. as applied in claims above and further in view of Guo et al. (NPL, “BoolNet: Minimizing the Energy Consumption of Binary Neural Networks”).

Regarding claim 5, the rejection of claim 1 is incorporated herein. The combination of Chandrashekar et al. and Saptharishi et al. as a whole does not teach wherein each image representation included in the plurality of image representations comprises a plurality of representative pixel values for a plurality of pixel locations included in a set of images associated with a corresponding output class. In the same field of endeavor, Guo et al. teach wherein each image representation included in the plurality of image representations comprises a plurality of representative pixel values for a plurality of pixel locations included in a set of images associated with a corresponding output class (see Fig. 1 disclose representative pixel values located in the first and second sets of images).
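To make the claim 5-6 limitation concrete, one plausible reading of "representative pixel values for pixel locations in a set of images" is a per-pixel summary statistic (e.g., the mean) computed over one class's image set. The sketch below is purely illustrative of the claim language; it is not code from Chandrashekar, Saptharishi, or Guo, and all names are assumptions:

```python
from statistics import mean

def class_representative(images):
    """Per-pixel summary statistic (here: the mean) over one class's image set.

    images: list of images, each a list of rows of pixel values,
            all with identical dimensions.
    Returns a single image of representative pixel values.
    """
    height, width = len(images[0]), len(images[0][0])
    return [[mean(img[r][c] for img in images) for c in range(width)]
            for r in range(height)]

# Toy example: a "class" of three 2x2 grayscale images.
imgs = [[[0.0, 1.0], [1.0, 0.0]],
        [[0.2, 0.8], [0.9, 0.1]],
        [[0.1, 0.9], [1.0, 0.2]]]
rep = class_representative(imgs)
print(rep)  # elementwise mean across the three images
```

Under this reading, claim 6's "one or more summary statistics" would simply swap `mean` for another aggregate (median, variance, etc.) at the same pixel location.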
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general use of class representative learning model (CRL), that can be especially effective in image classification of Chandrashekar et al. in view of a method of tracking an object captured by a camera system of Saptharishi et al. and minimizing the energy consumption of binary neural networks of Guo et al. in order to adaptively change the zero points of each pixel, in a light-weight manner (see Fig. 1).

Regarding claim 6, the rejection of claim 5 is incorporated herein. Guo et al. in the combination further teach wherein each representative pixel value included in the plurality of representative pixel values comprises one or more summary statistics associated with a set of pixel values for a given pixel location associated with the set of images (see page 4, section 3.2.1; “For inference, it utilizes the constant statistic mean and variance instead, which in result can be reformulated as a linear process”).

Regarding claim 8, the rejection of claim 1 is incorporated herein. Guo et al. in the combination further teach wherein subsequently determining that the first input image does match the first alternative image representation (see Fig. 1) comprises determining that one or more logical expressions included in the first alternative image representation evaluate to true based on a plurality of pixel values included in the first input image (see page 5 section 3.2.2; “(ii) We utilize the logic operators XNOR and OR for merging the binary features to the consecutive block (instead of 32-bit addition). Based on this novel shortcut design, called Logic Shortcuts, the feature maps in each stage of the network is completely binary without 32-bit operations” Note: the logic operators XNOR indicate to evaluate to true based on x and y values).
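The claim 8 notion of a logical expression over pixel values that "evaluates to true" can be sketched in the spirit of an XNOR comparison over binarized pixels. This is an illustration only, not code from Guo et al.'s BoolNet; the binarization threshold, the `min_agreement` parameter, and all function names are assumptions for the sketch:

```python
def binarize(image, threshold=0.5):
    """Flatten an image to a tuple of booleans (pixel > threshold)."""
    return tuple(px > threshold for row in image for px in row)

def xnor_match(input_bits, representation_bits, min_agreement=0.9):
    """Evaluate a simple logical expression over pixel values: the input
    matches when the fraction of XNOR-agreeing bits reaches min_agreement.
    (XNOR of two bits is true exactly when they are equal.)"""
    agree = sum(a == b for a, b in zip(input_bits, representation_bits))
    return agree / len(input_bits) >= min_agreement

# Toy 2x2 example: the probe agrees with the representation at every pixel.
alt_rep = binarize([[0.0, 1.0], [1.0, 0.0]])
probe = binarize([[0.1, 0.9], [0.8, 0.2]])
print(xnor_match(probe, alt_rep))  # True: all four binarized pixels agree
```

A conjunction or disjunction of pixel-value sets (claims 9 and 19) would compose such per-pixel predicates with `and`/`or` in the same fashion.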
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general use of class representative learning model (CRL), that can be especially effective in image classification of Chandrashekar et al. in view of a method of tracking an object captured by a camera system of Saptharishi et al. and minimizing the energy consumption of binary neural networks of Guo et al. in order to reduce the memory consumption of the intermediate feature maps (see page 5 section 3.2.2).

Regarding claim 9, the rejection of claim 1 is incorporated herein. Guo et al. in the combination further teach wherein the first alternative image representation comprises a disjunction of a first set of pixel values included in a first image associated with the first output class (see Fig. 1 disclose the disjunction between the 32 bit operation and binary conv) and a second set of pixel values included in a second image associated with the first output class (see Fig. 2b disclose generating multiple predictions). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general use of class representative learning model (CRL), that can be especially effective in image classification of Chandrashekar et al. in view of a method of tracking an object captured by a camera system of Saptharishi et al. and minimizing the energy consumption of binary neural networks of Guo et al. in order to effectively improve the accuracy (see Fig. 1).

Regarding claim 15, the rejection of claim 11 is incorporated herein. Guo et al.
in the combination further teach wherein comparing the first input image with the plurality of image representations comprises computing a deviation of each pixel value included in the first input image from a corresponding representative pixel value included in an image representation (see page 4, sec 3.2.1; “the batch normalization layer normalizes feature maps with an running mean µ and a running variance σ. For inference, it utilizes the constant statistic mean and variance instead, which in result can be reformulated as a linear process”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general use of class representative learning model (CRL), that can be especially effective in image classification of Chandrashekar et al. in view of a method of tracking an object captured by a camera system of Saptharishi et al. and minimizing the energy consumption of binary neural networks of Guo et al. in order to better balance accuracy and efficiency (see page 4, sec 3.2.1).

Regarding claim 16, the rejection of claim 15 is incorporated herein. Guo et al. in the combination further teach wherein the corresponding representative pixel value comprises one or more summary statistics associated with a set of pixel values for a pixel location, wherein the one or more summary statistics are generated from a set of images associated with a corresponding class (see page 4, section 3.2.1; “For inference, it utilizes the constant statistic mean and variance instead, which in result can be reformulated as a linear process”).

Regarding claim 17, the rejection of claim 11 is incorporated herein. Chandrashekar et al.
in the combination further teach wherein subsequently determining that the first input image does match the first alternative image representation comprises: determining the plurality of alternative image representations mapped to the plurality of output classes from a simplified representation of the trained machine learning model; wherein the plurality of alternative image representations includes the first alternative image representation (see Fig. 2 step 2 disclose simplified representation, page 5, right col. 1st para; “Definition 4: Class Representative Feature Space Class Representative Feature Space (CRFS) is a n dimensional semantic feature map in which each of the n dimensions represents the value of a semantic property. These properties may be categorical and contain real-valued data or models from deep learning methods [34]. The Class Representative Feature Space represents n dimensional representative features as a form of the Activation Feature Map (AFM)”). Guo et al. in the combination further teach determining that one or more logical expressions included in the first alternative image representation evaluate to true based on a plurality of pixel values included in the first input image (see Fig. 1b, “BoolNet. BoolNet uses 1-bit feature maps and logic operations reducing memory requirements and the need for 32-bit operations”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the general use of class representative learning model (CRL), that can be especially effective in image classification of Chandrashekar et al. in view of a method of tracking an object captured by a camera system of Saptharishi et al. and minimizing the energy consumption of binary neural networks of Guo et al. in order to reduce the memory consumption of the intermediate feature maps by 32× (see Fig. 1b).

Regarding claim 18, the rejection of claim 11 is incorporated herein.
Guo et al. in the combination further teach wherein the first alternative image representation comprises a logical expression representing a set of images associated with the first output class (see Fig. 1b, “ BoolNet. BoolNet uses 1-bit feature maps and logic operations reducing memory requirements and the need for 32-bit operations”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filling date to the claimed invention to modify the general use of class representative learning model (CRL), that can be especially effective in image classification of Chandrashekar et al. in view of a method of tracking an object captured by a camera system of Saptharishi et al. and minimizing the energy consumption of binary neural networks of Guo et al. in order to reduce the memory consumption of the intermediate feature maps by 32× (see Fig. 1b). . Regarding claim 19, the rejection of claim 18 is incorporated herein. Guo et al. in the combination further teach wherein the logical expression comprises one or more conjunctions of a first set of pixel values included in a first image (see page 9, 2nd para; “The Logic Shortcut aggregation is 31× more energy efficient than additive aggregation. Surprisingly, 32-bit PReLU consumes 26% more energy than a binary convolution, Int8 BN consumes about half of a binary convolution, and those two components are commonly used in conjunction with binary convolutions in previous BNNs”) and a disjunction of the first set of pixel values and a second set of pixel values included in a second image (see Fig. 1a disclose first sets pixel values and second sets of pixel values disjunction). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filling date to the claimed invention to modify the general use of class representative learning model (CRL), that can be especially effective in image classification of Chandrashekar et al. 
in view of a method of tracking an object captured by a camera system of Saptharishi et al. and minimizing the energy consumption of binary neural networks of Guo et al. in order to have an energy reduction by up to 6× (see page 9, 2nd para). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571)272-3475. The examiner can normally be reached Monday-Friday9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WINTA GEBRESLASSIE/ Examiner, Art Unit 2677
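The examiner repeatedly quotes Guo et al. for the proposition that inference-time batch normalization "can be reformulated as a linear process." For readers evaluating that teaching, the algebra is easy to verify: once the running mean and variance are frozen, BN collapses to a per-channel affine transform y = a·x + b. The sketch below is illustrative only, with hypothetical parameter values; it is not taken from the application record or from the cited reference's code.

```python
import numpy as np

# Illustrative only (hypothetical values, not from the record): at inference,
# batch normalization uses fixed running statistics, so it reduces to the
# linear form y = a*x + b per channel, with a = gamma/sqrt(var+eps) and
# b = beta - a*mean.

def fold_batch_norm(mean, var, gamma, beta, eps=1e-5):
    """Return the per-channel scale a and shift b equivalent to inference-time BN."""
    a = gamma / np.sqrt(var + eps)
    b = beta - a * mean
    return a, b

# Hypothetical running statistics and learned BN parameters for 3 channels.
mean = np.array([0.1, -0.2, 0.3])   # running mean (mu)
var = np.array([1.5, 0.7, 2.0])     # running variance (sigma^2)
gamma = np.array([1.0, 0.5, 2.0])   # learned scale
beta = np.array([0.0, 0.1, -0.1])   # learned shift

x = np.random.default_rng(0).normal(size=(4, 3))  # 4 samples, 3 channels

# Standard BN computation vs. the folded linear form: identical outputs.
bn_out = gamma * (x - mean) / np.sqrt(var + 1e-5) + beta
a, b = fold_batch_norm(mean, var, gamma, beta)
assert np.allclose(bn_out, a * x + b)
```

In practice this same folding is what lets deployed networks absorb BN into a preceding layer's weights and bias, which is the efficiency point the rejection leans on.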

Prosecution Timeline

Jun 02, 2022
Application Filed
Jan 15, 2025
Non-Final Rejection — §103
Apr 17, 2025
Response Filed
Aug 12, 2025
Final Rejection — §103
Oct 14, 2025
Response after Non-Final Action
Feb 11, 2026
Examiner Interview Summary
Feb 11, 2026
Applicant Interview (Telephonic)
Feb 13, 2026
Request for Continued Examination
Feb 20, 2026
Response after Non-Final Action
Mar 03, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579683: IMAGE VIEW ADJUSTMENT
2y 5m to grant · Granted Mar 17, 2026
Patent 12573238: BIOMETRIC FACIAL RECOGNITION AND LIVENESS DETECTOR USING AI COMPUTER VISION
2y 5m to grant · Granted Mar 10, 2026
Patent 12530768: SYSTEMS AND METHODS FOR IMAGE STORAGE
2y 5m to grant · Granted Jan 20, 2026
Patent 12524932: MACHINE LEARNING IMAGE RECONSTRUCTION
2y 5m to grant · Granted Jan 13, 2026
Patent 12511861: DETECTION OF ANNOTATED REGIONS OF INTEREST IN IMAGES
2y 5m to grant · Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+24.7%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 133 resolved cases by this examiner. Grant probability derived from career allow rate.
