Prosecution Insights
Last updated: April 19, 2026
Application No. 17/731,287

Methods and systems to identify the type of a document by matching reference features

Final Rejection — §103, §112
Filed
Apr 28, 2022
Examiner
NIU, FENG
Art Unit
2669
Tech Center
2600 — Communications
Assignee
Idemia Identity & Security France
OA Round
4 (Final)
Grant Probability: 68% (Favorable)
OA Rounds: 5-6
To Grant: 4y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68%, above average (98 granted / 143 resolved; +6.5% vs TC avg)
Interview Lift: +46.9% for resolved cases with interview (strong)
Avg Prosecution: 4y 3m (typical timeline)
Total Applications: 161 across all art units; 18 currently pending
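The headline figures above can be reproduced from the raw counts shown. A minimal sketch, assuming the dashboard truncates percentages and clamps the interview projection at 99% (the clamp is an assumption; the counts and lift are taken directly from the panel):

```python
# Reproduce the examiner stats from the counts shown above.
granted, resolved = 98, 143
allow_rate = granted / resolved            # ~0.685, displayed (truncated) as 68%
interview_lift = 0.469                     # the "+46.9%" relative lift shown above

# Assumed clamp: the dashboard never projects above 99%.
with_interview = min(allow_rate * (1 + interview_lift), 0.99)

print(int(allow_rate * 100))               # 68
print(f"{with_interview:.0%}")             # 99%
```

This is why "With Interview" reads 99% rather than a literal 68% + 46.9%: the lift is relative, and 0.685 × 1.469 would exceed 100% without the cap.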

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 38.7% (-1.3% vs TC avg)
§102: 19.6% (-20.4% vs TC avg)
§112: 26.0% (-14.0% vs TC avg)

Tech Center average is an estimate. Based on career data from 143 resolved cases.
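The deltas above are consistent with a single Tech Center baseline: subtracting each delta from its rate backs out roughly 40.0% for every statute, which appears to be the Tech Center average estimate the note refers to. A quick check:

```python
# Back out the implied Tech Center average from each statute's rate and delta.
rates  = {"101": 11.4, "103": 38.7, "102": 19.6, "112": 26.0}     # examiner %
deltas = {"101": -28.6, "103": -1.3, "102": -20.4, "112": -14.0}  # vs TC avg

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)   # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```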

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's amendments filed on 10/1/2025 overcome the following rejections set forth in the previous Office Action: the rejection of claims 1-13 and 18-20 under 35 USC §112(a)/(b) or 35 USC §112 (pre-AIA), first and second paragraphs; and the rejection of claim 19 under 35 USC §103. Applicant's arguments filed 10/1/2025 have been fully considered but they are not persuasive with regard to the art rejections. The Office has thoroughly reviewed applicant's arguments but maintains that the cited references reasonably and properly meet the claim limitations as previously filed; responses to the arguments on the amended claim limitations are detailed in the rejection section below. Furthermore, the amendments necessitate new grounds of rejection, as detailed below. On pages 13-14 of the Remarks, regarding claim 2, applicant argues that “one skilled in the art would have had no reason to modify the values to arrive at the claimed features,” on the assumption that the “prima facie case of obviousness” set forth in the previous Office Action was “based on optimization of a variable disclosed in a range in the prior art by showing that the claimed variable was not recognized in the prior art to be a result-effective variable.” The requirements for a proper response to a rejection may be found in 37 CFR 1.111(b) and MPEP § 714.02; see also MPEP § 707.07(a). The remarks do not provide any specific reasons as to why either the findings of fact or the legal conclusion of obviousness is allegedly in error.
For example, in the previous Office Action, the “prima facie case of obviousness” is NOT “based on optimization of a variable disclosed in a range in the prior art by showing that the claimed variable was not recognized in the prior art to be a result-effective variable.” In fact, as stated in the previous Office Action, “it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select “150 DPI with 40x40 dimensions and a pitch of 8” or “600 DPI with 100x100 dimensions and a pitch of 20” depending on the application, that is, these are simply design choices. Applicant has not disclosed that selecting “150 DPI with 40x40 dimensions and a pitch of 8” provides an advantage, is used for a particular purpose or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected applicant’s invention to perform equally well with either the selection taught by Kuklinski {modified by Rodriguez and Billinghurst} or the claimed selection because both selection perform the same function of identifying a type of a document by matching a reference feature with a feature of a document region by region using a grid.” This is consistent with the disclosure of the cited prior art references (i.e., those numbers are just examples, see Kuklinski: [0166, 0191], Billinghurst: [0073-0074]) and is also consistent with applicant’s disclosure (i.e., those numbers are just examples, see applicant’s originally filed specification, e.g., “[83] It should be noted that in the present application, distances, heights, widths, and pitch can be expressed in number of pixels. For example, the portions can have a width of 40 pixels and a height of 40 pixels, and the pitch PI can be of 8 pixels.
These dimensions are merely illustrative examples and can be set in accordance with the resolution of the printing method used to obtain the documents and in accordance with the resolution of the camera used to acquire the images of the documents (typically expressed in dots per inch (dpi), for example 150dpi). The dimensions of the portions may also be set in accordance with the size of the symbols visible on the images of documents (such as the star on document DA).”) (emphasis added by examiner). Thus, the remarks in response to the obviousness rejection do not comply with 37 CFR 1.111(b) and MPEP § 714.02. However, Applicant’s reply is considered to be a bona fide attempt at a response and is being accepted as a complete response.

References Cited in Prior Art Rejections

The following references are cited in the prior art rejections set forth below and are referred to as noted:
Kuklinski et al., US 20170132866 A1, published on 2017-05-11, hereinafter Kuklinski.
Rodriguez et al., US 20190197642 A1, published on 2019-06-27, hereinafter Rodriguez.
Billinghurst et al., US 20040136567 A1, published on 2018-06-08, hereinafter Billinghurst.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S.
1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kuklinski, in view of Rodriguez, and further in view of Billinghurst.

Regarding claim 1, Kuklinski discloses a method for identifying a type of a document (Kuklinski: Figs. 5-10) comprising: obtaining a document image of the document, (Kuklinski: 601 in Fig. 6, Figs. 7-10) processing the document image, obtained by the obtaining, in a (Kuklinski: 603-604 in Fig. 6. Figs. 12 and 16. [0153, 0161-0162, 0178]. “[0162] … For illustrative understanding, the feature data shown in FIG. 12 is derived from taking an image of the ID and dividing the image into sub-regions in the form of a grid of Feature Regions (FRs). In the simple case of a 2×3 grid (6 regions), 1201, properties, such as luminance, 1202, standard deviation (SD), 1203, and hue, 1204, can be calculated for each sample. Hence, a Multi-mode FV (MFV) with 18 nodes (3 features from each of 6 regions) is created, 1205. This process is then repeated for all samples in the class.”) obtaining a set of reference features each associated with a document type and a location within the document, (Kuklinski: 503 in Fig. 5. Figs. 12 and 16. [0153, 0161-0162, 0169-0170]. Feature extraction process [0161-0162] is the same for input image and sample images. Reference features = features extracted from document sample images [0155].)
matching each reference feature of the set of reference features with a feature of the plurality of features obtained by the processing to identify the document type of the document based on a first pixel position of each reference feature of the set of reference features as compared to a second pixel position of the feature of the plurality of features obtained by the processing to generate a matching score for the reference features, wherein the reference features correspond to at least one marking on the document which enables identification of a document type as predefined graphical element that is invariant across instances of the document type and distinct from personalized information, the reference features being selected in portions that are outside of portions in which information specific to a carrier of the document is visible, (Kuklinski: 605-606 in Fig. 6, Figs. 12, Fig. 18, 19B and 26A-C. [0013-0014, 0140, 0166, 0174-0175, 0177-0178, 0184-0185, 0191, 0195, 0199-0206]. Identify the type of the document = determine document class. “[0013] A self-learning system and methods for automatic document classification, authentication, and information extraction are described. One important type of document is a personal Identification Document (ID) such as a driver's license, but the invention can be applied to many other types of fixed format documents, e.g. currency, forms, permits, and certificates. Given sample(s) of a class of documents (i.e. sample collection), the invention analyzes the collection and automatically chooses the regions and properties of the class that best characterize it and differentiate it from other document classes of the same type.” “[0014] An ID can be considered a member of a Document Class (DC) which can be characterized by its issuer (e.g. Massachusetts—MA, New Hampshire—NH, etc.), date of first issue, type (Driver's License—DL, Identification Card—ID, Commercial Driver's License—CDL), and subtype. 
The system uses automated detailed image analysis of sample collections for each Document Class (DC) to select the Feature Regions (FRs) and associated classification Feature Properties (FPs) (characteristics such as luminance, chrominance, hue, edge information, 2D-FFT, histograms, geometry, etc.) that are most consistent, while masking out the regions and properties that have a large variance. The resultant ranked set of FRs, with associated FPs for each, comprise the DC Multi-mode Feature Vector (MFV). This MFV is a complete description of the DC.” “[0166] … If a 600 dpi scanner is used to image the ID, each FR for classification would be 100×100 pixels and each Authentication Region (AR) would be 20×20 pixels.” Since the document (such as ID in Figs. 12 and 19B) is scanned, the document image is formed by a pixel matrix. Fig. 19B shows regions of features (i.e., the claimed pixel position of a feature) compared to regions (such as E1 and F1) of reference features (such as white for class A and black for class B in E1 and F1) (i.e., the claimed pixel position of a reference feature). The “match score” is generated as in, e.g., [0178]. "[0174] … We take the Feature Vector for Class A but only considering features that are not marked as too variable. We also take the Feature Vector for Class B but only considering features that are not marked as too variable. Now only consider the features that are common to both, i.e., those features that are not highly variable in either class." "[0175] ... better separation may come from a property that is not the most invariant in either class, but from a property that is stable in both classes and has the greatest difference score." These properties, represented by Inter-Class Feature Vectors (IFVs), are interpreted as the claimed "at least one marking" differentiating one document type from another document type.)
and wherein the first pixel position of each reference feature of the set of reference features has a generated matching score (Kuklinski: Figs. 12 and 19B. [0166, 0177-0178, 0184-0185, 0202-0206]. E1 and F1 in Fig. 19B are examples of pixel positions of features or reference features. The “match score” is generated at least as in, e.g., [0178] and implied by feature based comparisons and matches as in [0201-0206]. Here, “a predetermined number of pixels” is interpreted as covering at least the case where the “predetermined number of pixels” is 1.) Although Kuklinski discloses the potential use of a neural network or Support Vector Machine (SVM) (Kuklinski: [0174]), Kuklinski does not explicitly disclose, but Rodriguez teaches, in an analogous art of document processing, a neural network configured to receive a document image and to deliver features for a plurality of portions of the document image. (Rodriguez: 704 in Fig. 7. 1008 in Fig. 10. [0087, 0105, 0112-0113, 0122]. “[0105] The method can also include extracting one or more characteristics from the captured image (BLOCK 704). In some implementations, the characteristics are extracted by the authenticator application 212 executing on the client device. In other implementations, the client device can transmit the image to a remote server, e.g., the authenticator server, where the characteristics are extracted by an authentication manager. The extracted characteristics can be any of the characteristics described herein.” “[0112] … The CNN classification manager 900 can mix two digital assets of information. The first digital asset can be the input image, which can have a total of three matrices of pixels. The pixel matrices can correspond to the red, blue, and green color channels. Each pixel in each matrix can be an integer value—for example, between 0 and 255. The second digital asset can be a convolution kernel.
The convolution kernel can be a single matrix of floating point numbers where the pattern and the size of the numbers can be a formula for how to intertwine the input image with the kernel in the convolution operation. The output of the kernel is the altered image, which can be referred to as called a feature map.” “[0113] The CNN of the CNN classification manager 900 can include multiple layers of receptive fields. For example, the CNN can include a convolutional layer, a pooling layer, a rectified linear unit layer, a fully connected layer, and a loss layer. The kernels can be a small collection of neurons that process portions of the input image.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kuklinski’s disclosure with Rodriguez’s teachings by combining the method for identifying a type of a document (from Kuklinski) with the technique of processing a document image to output feature maps of the document image by a neural network (from Rodriguez) to yield no more than predictable use of prior art elements according to their established functions since all the claimed elements, which are taught by prior art references, would continue to operate in the same manner, particularly, the method for identifying a type of a document would still work in the way according to Kuklinski and the technique of processing a document image to output feature maps of the document image by a neural network would continue to function as taught by Rodriguez. In fact, the inclusion of Rodriguez's technique of processing a document image to output feature maps of the document image by a neural network would provide a practical and an alternative implementation of the method for identifying a type of a document and thus would enable a better and more flexible method for identifying a type of a document. 
Kuklinski discloses a “match score” when determining the pairwise comparison between a feature and a reference feature (Kuklinski: [0178, 0201-0206, 0278]) and Rodriguez also teaches scoring features to determine the authenticity of the identification document (Rodriguez: [0118, 0123-0124]), which may imply having a generated matching score above a predetermined threshold. However, the combination of Kuklinski with Rodriguez, or Kuklinski {modified by Rodriguez}, does not explicitly disclose generating a matching score above a predetermined threshold when matching features. Billinghurst, however, teaches, in an analogous art of image processing involving template matching of image features, generating a matching score above a predetermined threshold when matching features. (Billinghurst: “[0098] … The position of the selected features is determined by the template matching process described in 1.1 (typically, a match is found if the NCC score is greater than 0.7) and the radial distortion is corrected.” The template matching is discussed in [0038-0045]. “[0032] … In step 216, the facility selects the best visual features of the surface, such as the best four features.
These selected features are sometimes referred to as "point features."”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kuklinski {modified by Rodriguez}’s disclosure with Billinghurst’s teachings by combining the method for identifying a type of a document (from Kuklinski {modified by Rodriguez}) with the technique of generating a matching score above a predetermined threshold when matching features (from Billinghurst) to yield no more than predictable use of prior art elements according to their established functions since all the claimed elements, which are taught by prior art references, would continue to operate in the same manner, particularly, the method for identifying a type of a document would still work in the way according to Kuklinski {modified by Rodriguez} and the technique of generating a matching score above a predetermined threshold when matching features would continue to function as taught by Billinghurst. In fact, the inclusion of Billinghurst's technique of generating a matching score above a predetermined threshold when matching features would provide a practical implementation of the method for identifying a type of a document and thus would enable a better and more effective method for identifying a type of a document due to the practical implementation of template matching made available with Billinghurst’s technique. Therefore, it would have been obvious to combine Kuklinski with Rodriguez and Billinghurst to obtain the invention as specified in claim 1.

Regarding claim 2, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, wherein a grid size of the plurality of portions of the document image is Kuklinski: “[0162] … For illustrative understanding, the feature data shown in FIG. 12 is derived from taking an image of the ID and dividing the image into sub-regions in the form of a grid of Feature Regions (FRs).
” “[0166] … Therefore, given allowances for border deterioration, a 12×19 grid (240 regions) would meet this classification requirement and a 5×5 grid within each FR would meet the authentication criteria. If a 600 dpi scanner is used to image the ID, each FR for classification would be 100×100 pixels and each Authentication Region (AR) would be 20×20 pixels.” The 600 dpi resolution can be reduced to 50 dpi, see [0191]. Rodriguez: [0075-0076], grid; [0089-0090], DPI. Billinghurst: [0073-0074], 100 or 200 dpi) Kuklinski {modified by Rodriguez and Billinghurst} does not explicitly disclose that a grid size of the plurality of portions of the document image is 150 DPI, each portion of the plurality of portions being 40x40 pixels and a grid has a pitch of 8. However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select 150 DPI with each portion being 40x40 pixels and a grid having a pitch of 8 or 600 DPI with each portion being 100x100 pixels and a grid having a pitch of 20 depending on the application, that is, these are simply design choices. Applicant has not disclosed that selecting 150 DPI with each portion being 40x40 pixels and a grid having a pitch of 8 provides an advantage, is used for a particular purpose or solves a stated problem (see paragraph [83] in applicant’s originally filed specification). One of ordinary skill in the art, furthermore, would have expected applicant’s invention to perform equally well with either the selection taught by Kuklinski {modified by Rodriguez and Billinghurst} or the claimed selection because both selections perform the same function of identifying a type of a document by matching a reference feature with a feature of a document region by region using a grid.
Therefore, it would have been obvious to one of ordinary skill in this art to modify Kuklinski {modified by Rodriguez and Billinghurst} with different design choices to obtain the invention as specified in claim 2.

Regarding claim 3, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, further comprising matching each reference feature with a feature associated with a portion of the document image corresponding to the location of the reference feature or matching each reference feature with a feature associated with a portion of the document image located in the image at the location associated with the reference feature or within a distance from the location associated with the reference feature. (Kuklinski: 605-606 in Fig. 6, Figs. 12, 18, 19B and 26A-C. [0013-0014, 0140, 0162, 0166, 0174-0175, 0177-0178, 0184-0185, 0191, 0195, 0199-0206]. Rodriguez: [0112-0113]. Billinghurst: [0004, 0032, 0038-0045, 0098].)

Regarding claim 4, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, wherein obtaining a document image comprises obtaining an initial image on which a document is visible, detecting the borders of the document in the initial image, extracting the document from the initial image and adjusting the orientation and the resolution of the extracted document to obtain the document image. (Kuklinski: Fig. 8. [0156].)

Regarding claim 5, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, wherein the portions of the document image all have the same dimensions. (Kuklinski: Fig. 19B. [0178].)

Regarding claim 6, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, wherein the portions of the document image are arranged in accordance with a grid having a given pitch smaller than the width and/or the height of all the portions to provide overlapping coverage in the document image. (Kuklinski: Figs. 12 and 19B. [0162, 0166, 0178, 0185-0186].
As discussed in [0185], the finest geometry (i.e., grid) is at the pixel level, which has “a given pitch smaller than the width and/or the height of all the portions”. Figs. 12 and 19B show two examples of different arrangements of image portions (or regions). See also discussions regarding claim 2.)

Regarding claim 7, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, wherein the document is a personal document. (Kuklinski: Figs. 12 and 19B. [0013-0014, 0019].)

Regarding claim 8, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, wherein features of the plurality of features and reference features of the set of reference features are vectors having a given length. (Kuklinski: Figs. 12-13 and 19B. [0162-0164, 0178]. “[0162] … a Multi-mode FV (MFV) with 18 nodes (3 features from each of 6 regions) is created, 1205. This process is then repeated for all samples in the class.”)

Regarding claim 9, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, wherein matching each reference feature of the set of reference features with a feature of the plurality of features further includes computing an individual similarity score for each reference feature, and computing, for each possible type of document, a document similarity score based on individual similarity scores of the reference features associated with a respective type of document, determining the document type of the document based on computed document similarity scores. (Rodriguez: implied by the scoring process in [0123-0124] and 1010 in Fig. 10. See also [0016, 0018, 0026, 0112, 0154, 0173, 0175, 0178, 0180, 0183, 0190, 0199-0207, 0221] for more details. Billinghurst: [0004, 0032, 0038-0045, 0098]. See discussions regarding claim 1.) The reasoning and motivation to combine are similar to those of claim 1.
Regarding claim 10, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, further comprising a preliminary training phase of the neural network. (Rodriguez: 1002 in Fig. 10.) The reasoning and motivation to combine are similar to those of claim 1.

Regarding claim 11, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, further comprising a preliminary step of enrolling a reference feature comprising: obtaining an image of a reference document, selecting a portion of the image of the reference document to obtain a reference image having a reference location, processing the reference image in the neural network to obtain a reference feature associated with the reference location and the type of the reference document, adding the reference feature to the set of reference features. (Kuklinski: implied by the training process shown in Figs. 5 and 9-11, [0153, 0157-0160]. Rodriguez: 1002 in Fig. 10.) The reasoning and motivation to combine are similar to those of claim 1.

Regarding claim 12, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, further comprising a preliminary step of enrolling a reference feature comprising: obtaining an image of a reference document, processing the image of the reference document in the neural network to obtain a plurality of second features, selecting a second feature as the reference feature associated with a location in the image of the reference document. (Kuklinski: implied by the training process shown in Figs. 5 and 9-11, [0153, 0157-0160]. Rodriguez: 1002 in Fig. 10. Any features, including features from Kuklinski and feature maps from Rodriguez, can be interpreted as second features.)
Regarding claim 13, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 11, wherein the reference document is a personal document including visible personal information, and obtaining a reference image includes selecting a portion of the image of the reference document distinct from the visible personal information. (Kuklinski: [0013-0014, 0020-0021, 0102]. Variable content in an ID document (i.e., personal information) is masked out.)

Claims 14 and 17 are the apparatus and computer readable medium claims (Kuklinski: [0061-0062, 0152]), respectively, corresponding to and broader in scope than method claim 1, and are therefore rejected at least on the same grounds as claim 1.

Regarding claim 15, Kuklinski {modified by Rodriguez and Billinghurst} discloses the system of claim 14, further comprising a camera or a scanner. (Kuklinski: Figs. 7-8. [0155-0156].)

Regarding claim 16, Kuklinski {modified by Rodriguez and Billinghurst} discloses the system of claim 15, further comprising a light source configured to light up the document observed by the camera or scanner, wherein the light source emits visible light, infrared light, or ultraviolet light. (Kuklinski: Fig. 7. [0155].)

Regarding claim 18, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, wherein the first pixel position of each reference feature of the set of reference features is indicated as matching the second pixel position of the feature of the plurality of features obtained by the processing when the first pixel position is the same as the second pixel position. (Kuklinski: Figs. 12 and 19B. [0166, 0177-0178, 0184-0185, 0202-0206]. E1 and F1 in Fig. 19B are examples of pixel positions of features or reference features. See discussions under claim 1.)
Regarding claim 20, Kuklinski {modified by Rodriguez and Billinghurst} discloses the method of claim 1, wherein each of the first pixel position and second pixel position includes a single pixel or a plurality of pixels. (Kuklinski: Figs. 12 and 19B. [0166, 0177-0178, 0185]. Any of the regions in Figs. 12 and 19B is an example of a pixel position and includes a plurality of pixels. Billinghurst: [0004, 0032, 0038-0045, 0098].)

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FENG NIU whose telephone number is (571)272-9592. The examiner can normally be reached on Monday - Friday, 8am-5pm PT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park can be reached on (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FENG NIU/
Primary Examiner, Art Unit 2669
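For readers outside image processing, the matching scheme at the heart of the §103 combination (Kuklinski's grid of feature regions, the claimed 40x40 portions with a pitch of 8, and Billinghurst's "NCC score greater than 0.7" criterion) can be sketched roughly as follows. This is a minimal illustration: the function names, the sliding-window loop, and the synthetic test data are assumptions, not the cited references' actual implementations.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches (range [-1, 1])."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_references(image, references, size=40, pitch=8, threshold=0.7):
    """Slide a size x size window over the image on a grid with the given
    pitch, score every reference patch at every grid position, and report
    each reference's best score plus whether it clears the threshold."""
    h, w = image.shape
    results = {}
    for name, ref in references.items():
        best = -1.0
        for y in range(0, h - size + 1, pitch):
            for x in range(0, w - size + 1, pitch):
                best = max(best, ncc(image[y:y + size, x:x + size], ref))
        results[name] = (best, best > threshold)
    return results

# A reference patch cut from the image itself matches perfectly; an unrelated
# random patch scores near zero and is rejected by the 0.7 threshold.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
refs = {"present": img[8:48, 8:48].copy(), "absent": rng.random((40, 40))}
out = match_references(img, refs)
print(out["present"][1], out["absent"][1])   # True False
```

The pitch-smaller-than-window detail (8 < 40) is what produces the overlapping coverage recited in claim 6: consecutive windows share most of their pixels.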

Prosecution Timeline

Apr 28, 2022
Application Filed
Nov 06, 2024
Non-Final Rejection — §103, §112
Feb 24, 2025
Interview Requested
Mar 05, 2025
Examiner Interview Summary
Mar 05, 2025
Applicant Interview (Telephonic)
Mar 07, 2025
Response Filed
Mar 19, 2025
Final Rejection — §103, §112
May 28, 2025
Examiner Interview Summary
May 28, 2025
Applicant Interview (Telephonic)
Jun 23, 2025
Request for Continued Examination
Jun 25, 2025
Response after Non-Final Action
Jun 27, 2025
Non-Final Rejection — §103, §112
Oct 01, 2025
Response Filed
Oct 16, 2025
Final Rejection — §103, §112 (current)
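The current final rejection (mailed Oct 16, 2025) sets the three-month shortened statutory period described in the action, extendable under 37 CFR 1.136(a) up to the six-month statutory maximum. A quick sketch of those dates; the `add_months` helper is illustrative arithmetic, not a docketing rule (advisory-action timing can shift the extension-fee calculation, as the action itself notes):

```python
from datetime import date

def add_months(d, months):
    """Advance a date by whole calendar months (same day-of-month; safe for
    mid-month dates like the 16th)."""
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, d.day)

mailed = date(2025, 10, 16)           # mailing date of the current final rejection
shortened = add_months(mailed, 3)     # three-month shortened statutory period
statutory = add_months(mailed, 6)     # absolute six-month statutory cutoff
print(shortened, statutory)           # 2026-01-16 2026-04-16
```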

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579771
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
2y 5m to grant Granted Mar 17, 2026
Patent 12555358
IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD THEREOF
2y 5m to grant Granted Feb 17, 2026
Patent 12525009
PRUNING A VISION TRANSFORMER
2y 5m to grant Granted Jan 13, 2026
Patent 12488486
METHOD FOR ESTIMATING FLUID SATURATION OF A ROCK
2y 5m to grant Granted Dec 02, 2025
Patent 12482082
SYSTEMS AND METHODS FOR GENERATING ENHANCED OPTHALMIC IMAGES
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 68%
With Interview: 99% (+46.9%)
Median Time to Grant: 4y 3m
PTA Risk: High

Based on 143 resolved cases by this examiner. Grant probability derived from career allow rate.
