Prosecution Insights
Last updated: April 19, 2026
Application No. 17/548,891

IMAGE SORTING METHOD, DEVICE, ELECTRONIC APPARATUS, AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Dec 13, 2021
Examiner: WELLS, HEATH E
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Lenovo (Beijing) Limited
OA Round: 5 (Final)

Grant Probability: 75% (Favorable)
OA Rounds: 6-7
To Grant: 3y 5m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 75% — above average (58 granted / 77 resolved; +13.3% vs TC avg)
Interview Lift: +18.1% — strong lift among resolved cases with an interview
Typical Timeline: 3y 5m average prosecution; 46 applications currently pending
Career History: 123 total applications across all art units

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§102:  2.4% (-37.6% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates; based on career data from 77 resolved cases.
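The headline figures above are internally consistent; a minimal sketch of the arithmetic follows. Note that the implied Tech Center average is an inference from the stated delta, not a value taken from the report:

```python
# Sanity-check the examiner statistics quoted above.
granted, resolved = 58, 77

career_allow_rate = granted / resolved * 100      # reported as 75%
assert round(career_allow_rate) == 75

# "+13.3% vs TC avg" implies a Tech Center average allow rate near 62%.
implied_tc_avg = career_allow_rate - 13.3
print(f"allow rate: {career_allow_rate:.1f}%, implied TC avg: {implied_tc_avg:.1f}%")
# → allow rate: 75.3%, implied TC avg: 62.0%
```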

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

The reply filed on 5 September 2025 has been entered. Applicant's arguments with respect to claims 1-4, 6-10, 12-16, and 18-23 have been considered but are moot in view of the new grounds of rejection necessitated by the amendments. Applicant is advised to also review the additional references when preparing any amendments. Claims 1-4, 6-10, 12-16, and 18-23 are pending in this application and have been considered below. Claims 5, 11, and 17 are canceled by the applicant.

Priority

Receipt is acknowledged that the application claims priority to foreign application No. CN202110082059.X dated 21 January 2021. Copies of the certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Claim Interpretation

Claims 13 and 14 have been amended. The claim interpretation under 35 USC 112(f) is withdrawn.

1st Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 6-8, 12-14, 18, and 20-23 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2014/0140571 A1 (Elmenhurst et al.).

Claim 1

[AltContent: textbox (Elmenhurst et al. Fig.
6, showing the analysis of an image of a letter for areas of interest.)]

Regarding Claim 1, Elmenhurst et al. teach an image sorting method ("The system 100 may be configured to process a set of images of mail pieces," paragraph [0019]) comprising: obtaining a plurality of images that need to be sorted ("The system 100 may be configured to process a set of images of mail pieces," paragraph [0019]); determining a feature identification area ("Each image may be parsed into regions of interest and/or components, and a particular component may be associated with, and/or matched to, one or more lines of text and/or input data fields (e.g. STATE, ZIP, ADDRESSEE NAME)," paragraph [0019]) of an image of the plurality of images ("The system 100 may be configured to process a set of images of mail pieces," paragraph [0019]), content of the feature identification area being used to distinguish different images of the plurality of images, the feature identification area being part of the image ("The OCR system 133 may use the Block-Field-Line Locator 134 to identify a region of interest or address block and subsequently the individual lines within that address block data," paragraph [0019]), including: recognizing an object category shown in the image, the object category representing a category of an object content shown in the image ("The system 100 may take the parsed image data and deduce the allowed patterns in the addresses for that area and/or category," paragraph [0020]); and determining the feature identification area of the image based on positioning information of the feature identification area ("The system 100 may be configured to extract the information from the object (object information) and then categorize the extracted information (categorizing information), for example, as belonging to a predetermined area and/or category," paragraph [0022]) corresponding of the object category of the image, the
positioning information being indicative of a preconfigured position of the feature identification area for the object category as a coordinate range relative to an image frame ("FIG. 9 illustrates an example image of a mail piece 910 and a coordinate system 900 for providing, determining, identifying, and/or generating a characterization of one or more mail pieces," paragraph [0057]), the positioning information of the feature identification area being predetermined according to the object category ("For example, the destination address 930 may be associated with a first set of coordinates, the return address 940 may be associated with a second set of coordinates, and/or the postage indicia 950 may be associated with a third set of coordinates," paragraph [0058]); recognizing the content of the feature identification area of each of the images in sequence to obtain a feature content of the feature identification area of each of the images ("'Document Fingerprint' may be determined for each document, such as a mail piece, based on one or more elements, such as those described with reference to FIGS. 3 to 9," paragraph [0060]), comprising performing optical character recognition (OCR) on the feature identification area of each of the images as indicated by the positioning information ("'Document Fingerprint' may be determined for each document, such as a mail piece, based on one or more elements, such as those described with reference to FIGS. 3 to 9," paragraph [0019]); and sorting the plurality of images based on the feature content of the feature identification area of each of the images ("Objects to be analyzed, identified, sorted, delivered, or classified may be fed into the system 100 at the object infeed 140 before being processed and ultimately removed at the exit 150 or as sortation completes," paragraph [0017]).

It is recognized that the citations and evidence provided above are derived from potentially different embodiments of a single reference.
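For orientation only, the claim 1 sequence mapped above (look up a preconfigured, category-specific coordinate range; OCR that region; sort by the recognized content) can be sketched as follows. This is an illustrative reading of the claim language, not the applicant's or Elmenhurst's implementation; `recognize_text` is a hypothetical stand-in for a real OCR engine, and the categories and coordinates are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Image:
    category: str   # recognized object category, e.g. "invoice"
    regions: dict   # toy stand-in for pixel data: coordinate box -> text

# Positioning information: per object category, a preconfigured
# coordinate range (left, top, right, bottom) relative to the image frame.
ROI_BY_CATEGORY = {
    "invoice": (0, 0, 100, 30),
    "test_paper": (20, 60, 120, 90),
}

def recognize_text(image, box):
    """Hypothetical OCR stand-in: return the text found inside `box`."""
    return image.regions.get(box, "")

def sort_images(images):
    """Sort images by the feature content of each one's identification area."""
    def feature_content(image):
        box = ROI_BY_CATEGORY[image.category]   # positioning info per category
        return recognize_text(image, box)       # OCR the feature area
    return sorted(images, key=feature_content)
```

Under this reading, a different sorting rule (e.g., numeric ascending order) would only change the key function, not the overall pipeline.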
Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary embodiments, because Elmenhurst et al. explicitly motivate doing so at least in paragraphs [0062] and [0093], including "Whereas the specification repeatedly provides examples identifying a mail piece or mail pieces, the systems, methods, processes, and operations described herein may also be used to analyze or compare other types of documents, files, forms, contracts, letters, or records associated with insurance, medical, dental, passports, tax, accounting, etc." and otherwise motivating experimentation and optimization.

The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of non-transitory computer readable storage medium claim 7 and apparatus claim 13, while noting that the rejection above cites to both device and method disclosures. Claims 7 and 13 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.

Claim 2

Regarding claim 2, Elmenhurst et al.
teach the method of claim 1, further comprising, before sorting the plurality of images based on the feature content of the feature identification area of each of the images: recognizing a content category, to which the content of the feature identification area of the image belongs, the content category representing a data representation form of the content of the feature identification area ("The OCR system 133 may use the Block-Field-Line Locator 134 to identify a region of interest or address block and subsequently the individual lines within that address block data," paragraph [0019] where an address block is a content category); wherein sorting the plurality of images based on the feature content of the feature identification area of each of the images includes: sorting the plurality of images according to a sorting method of a plurality of feature content corresponding to the content category and in connection with the feature content of the feature identification area of each of the plurality of images ("At operation 226, the mail piece may be sorted using the printed barcode for subsequent delivery at operation 228," paragraph [0034]). Claim 6 Regarding claim 6, Elmenhurst et al. 
teach the method of claim 1, further comprising, after obtaining the plurality of images that need to be sorted: determining whether content modules and arrangements of content modules of the plurality of images are same, an image including content of at least one content module ("For example, it can be determined that the bottom-most line (e.g., as detected by a parser) has the rightward-most entity labeled "ZIP-5", the one to the left of that labeled "STATE" and the remaining, leftward-most entity labeled "CITY"," paragraph [0020]); and in response to the content modules and the arrangements of the content modules of the plurality of images are not same, outputting a prompt to a user, the prompt being used to remind the user that images of different categories exist ("If the mail piece image does not comprise an address, or if the address cannot be identified from the image, the mail piece may be rejected for manual sorting at operation 236 prior to delivery at operation 228," paragraph [0029] where rejection for manual sorting is a prompt).

Claim 7

Regarding claim 7, Elmenhurst et al. teach a non-transitory computer readable storage medium storing computer program instructions, when executed by a processor, the computer program instructions implementing the image sorting method ("The system 100 may be configured to process a set of images of mail pieces," paragraph [0019]) comprising: obtaining a plurality of images that need to be sorted ("The system 100 may be configured to process a set of images of mail pieces," paragraph [0019]); determining a feature identification area ("Each image may be parsed into regions of interest and/or components, and a particular component may be associated with, and/or matched to, one or more lines of text and/or input data fields (e.g.
STATE, ZIP, ADDRESSEE NAME)," paragraph [0019]) of an image of the plurality of images ("The system 100 may be configured to process a set of images of mail pieces," paragraph [0019]), content of the feature identification area being used to distinguish different images of the plurality of images, the feature identification area being part of the image ("The OCR system 133 may use the Block-Field-Line Locator 134 to identify a region of interest or address block and subsequently the individual lines within that address block data," paragraph [0019]), including: recognizing an object category shown in the image, the object category representing a category of an object content shown in the image ("The system 100 may take the parsed image data and deduce the allowed patterns in the addresses for that area and/or category," paragraph [0020]); and determining the feature identification area of the image based on positioning information of the feature identification area ("The system 100 may be configured to extract the information from the object (object information) and then categorize the extracted information (categorizing information), for example, as belonging to a predetermined area and/or category," paragraph [0022]) corresponding of the object category of the image, the positioning information being indicative of a preconfigured position of the feature identification area for the object category as a coordinate range relative to an image frame, ("FIG. 
9 illustrates an example image of a mail piece 910 and a coordinate system 900 for providing, determining, identifying, and/or generating a characterization of one or more mail pieces," paragraph [0057]), the positioning information of the feature identification area being predetermined according to the object category ("For example, the destination address 930 may be associated with a first set of coordinates, the return address 940 may be associated with a second set of coordinates, and/or the postage indicia 950 may be associated with a third set of coordinates," paragraph [0058]); recognizing the content of the feature identification area of each of the images in sequence to obtain a feature content of the feature identification area of each of the images ("'Document Fingerprint' may be determined for each document, such as a mail piece, based on one or more elements, such as those described with reference to FIGS. 3 to 9," paragraph [0060]), comprising performing optical character recognition (OCR) on the feature identification area of each of the images as indicated by the positioning information ("'Document Fingerprint' may be determined for each document, such as a mail piece, based on one or more elements, such as those described with reference to FIGS. 3 to 9," paragraph [0019]); and sorting the plurality of images based on the feature content of the feature identification area of each of the images ("Objects to be analyzed, identified, sorted, delivered, or classified may be fed into the system 100 at the object infeed 140 before being processed and ultimately removed at the exit 150 or as sortation completes," paragraph [0017]).

Claim 8

Regarding claim 8, Elmenhurst et al.
teach the non-transitory computer readable storage medium of claim 7, wherein the image sorting method further includes, before sorting the plurality of images based on the feature content of the feature identification area of each of the images: recognizing a content category, to which the content of the feature identification area of the image belongs, the content category representing a data representation form of the content of the feature identification area ("The OCR system 133 may use the Block-Field-Line Locator 134 to identify a region of interest or address block and subsequently the individual lines within that address block data," paragraph [0019] where an address block is a content category); wherein sorting the plurality of images based on the feature content of the feature identification area of each of the images includes: sorting the plurality of images according to a sorting method of a plurality of feature content corresponding to the content category and in connection with the feature content of the feature identification area of each of the plurality of images ("At operation 226, the mail piece may be sorted using the printed barcode for subsequent delivery at operation 228," paragraph [0034]). Claim 12 Regarding claim 12, Elmenhurst et al. 
teach the non-transitory computer readable storage medium of claim 7, wherein the image sorting method further includes, after obtaining the plurality of images that need to be sorted: determining whether content modules and arrangements of content modules of the plurality of images are same, an image including content of at least one content module ("For example, it can be determined that the bottom-most line (e.g., as detected by a parser) has the rightward-most entity labeled "ZIP-5", the one to the left of that labeled "STATE" and the remaining, leftward-most entity labeled "CITY"," paragraph [0020]); and in response to the content modules and the arrangements of the content modules of the plurality of images are not same, outputting a prompt to a user, the prompt being used to remind the user that images of different categories exist ("If the mail piece image does not comprise an address, or if the address cannot be identified from the image, the mail piece may be rejected for manual sorting at operation 236 prior to delivery at operation 228," paragraph [0029] where rejection for manual sorting is a prompt).

Claim 13

Regarding claim 13, Elmenhurst et al.
teach an image sorting device comprising: a processor ("use dedicated processor systems, micro controllers, programmable logic devices, or microprocessors that may perform some or all of the operations described herein," paragraph [0095]); and a memory coupled to the processor, ("processing device may execute instructions or "code" stored in memory," paragraph [0096])the memory storing instructions which, when executed by the processor, cause the processor to: obtain a plurality of images that need to be sorted ("The system 100 may be configured to process a set of images of mail pieces," paragraph [0019]); determine a feature identification area ("Each image may be parsed into regions of interest and/or components, and a particular component may be associated with, and/or matched to, one or more lines of text and/or input data fields (e.g. STATE, ZIP, ADDRESSEE NAME)," paragraph [0019]) of an image of the plurality of images ("The system 100 may be configured to process a set of images of mail pieces," paragraph [0019]), content of the feature identification area being used to distinguish different images of the plurality of images, the feature identification area being part of the image ("The OCR system 133 may use the Block-Field-Line Locator 134 to identify a region of interest or address block and subsequently the individual lines within that address block data," paragraph [0019]), including: recognizing an object category shown in the image, the object category representing a category of an object content shown in the image ("The system 100 may take the parsed image data and deduce the allowed patterns in the addresses for that area and/or category," paragraph [0020]); and determining the feature identification area of the image based on positioning information of the feature identification area ("The system 100 may be configured to extract the information from the object (object information) and then categorize the extracted information (categorizing 
information), for example, as belonging to a predetermined area and/or category," paragraph [0022]) corresponding of the object category of the image, the positioning information being indicative of a preconfigured position of the feature identification area for the object category as a coordinate range relative to an image frame ("FIG. 9 illustrates an example image of a mail piece 910 and a coordinate system 900 for providing, determining, identifying, and/or generating a characterization of one or more mail pieces," paragraph [0057]), the positioning information of the feature identification area being predetermined according to the object category ("For example, the destination address 930 may be associated with a first set of coordinates, the return address 940 may be associated with a second set of coordinates, and/or the postage indicia 950 may be associated with a third set of coordinates," paragraph [0058]); recognize the content of the feature identification area of each of the images in sequence to obtain a feature content of the feature identification area of each of the images ("'Document Fingerprint' may be determined for each document, such as a mail piece, based on one or more elements, such as those described with reference to FIGS. 3 to 9," paragraph [0060]), comprising performing optical character recognition (OCR) on the feature identification area of each of the images as indicated by the positioning information ("'Document Fingerprint' may be determined for each document, such as a mail piece, based on one or more elements, such as those described with reference to FIGS.
3 to 9," paragraph [0019]); and sort the plurality of images based on the feature content of the feature identification area of each of the images ("Objects to be analyzed, identified, sorted, delivered, or classified may be fed into the system 100 at the object infeed 140 before being processed and ultimately removed at the exit 150 or as sortation completes," paragraph [0017]). Claim 14 Regarding claim 14, Elmenhurst et al. teach the device of claim 13, wherein the processor is further to: before sorting the plurality of images, recognize a content category, to which the content of the feature identification area of the image belongs, the content category representing a data representation form of the content of the feature identification area ("The OCR system 133 may use the Block-Field-Line Locator 134 to identify a region of interest or address block and subsequently the individual lines within that address block data," paragraph [0019] where an address block is a content category); wherein to sort the plurality of images based on the feature content of the feature identification area of each of the images, the processor is further to: sort the plurality of images according to a sorting method of a plurality of feature content corresponding to the content category and in connection with the feature content of the feature identification area of each of the plurality of images ("At operation 226, the mail piece may be sorted using the printed barcode for subsequent delivery at operation 228," paragraph [0034]). Claim 18 Regarding claim 18, Elmenhurst et al. 
teach the device of claim 13, further comprising, after obtaining the plurality of images that need to be sorted, the processor is further to: determine whether content modules and arrangements of content modules of the plurality of images are same, an image including content of at least one content module ("For example, it can be determined that the bottom-most line (e.g., as detected by a parser) has the rightward-most entity labeled "ZIP-5", the one to the left of that labeled "STATE" and the remaining, leftward-most entity labeled "CITY"," paragraph [0020]); and in response to the content modules and the arrangements of the content modules of the plurality of images are not same, output a prompt to a user, the prompt being used to remind the user that images of different categories exist ("If the mail piece image does not comprise an address, or if the address cannot be identified from the image, the mail piece may be rejected for manual sorting at operation 236 prior to delivery at operation 228," paragraph [0029] where rejection for manual sorting is a prompt).

Claim 20

Regarding claim 20, Elmenhurst et al. teach the method of claim 1, wherein: different object categories correspond to different feature identification areas of the images ("The system 100 may be configured to extract the information from the object (object information) and then categorize the extracted information (categorizing information), for example, as belonging to a predetermined area and/or category," paragraph [0022]), and each feature identification area is at a preconfigured position of a corresponding image ("FIG. 9 illustrates an example image of a mail piece 910 and a coordinate system 900 for providing, determining, identifying, and/or generating a characterization of one or more mail pieces," paragraph [0057]).

Claim 21

Regarding claim 21, Elmenhurst et al.
teach the method of claim 1, wherein: the object category of the object in the image includes one or more of an invoice, a test paper, or an article ("Whereas the specification repeatedly provides examples identifying a mail piece or mail pieces, the systems, methods, processes, and operations described herein may also be used to analyze or compare other types of documents, files, forms, contracts, letters, or records associated with insurance, medical, dental, passports, tax, accounting, etc," paragraph [0093]). Claim 22 Regarding claim 22, Elmenhurst et al. teach the method of claim 1, wherein sorting the plurality of images based on the feature content of the feature identification area of each of the images includes: in response to the feature content including a number ("For a manufacturing plant or parts depot, this may be a label or serial number which identifies a part or otherwise associates information with the part," paragraph [0021]), sorting the plurality of images based on the feature content corresponding to the images according to a sorting rule of numerical values in ascending order ("Part of the defined pattern may include information on how to apply the pattern either alone or in a defined and prioritized order with other defined patterns, and what generic and specific information to return," paragraph [0023] where the broadest reasonable interpretation of ascending order includes a defined and prioritized order.). Claim 23 Regarding claim 23, Elmenhurst et al. teach the method according to claim 1, further comprising: determining an area distribution pattern corresponding to the plurality of images ("A defined pattern or set of patterns associated with the object information and/or the categorizing information may exist a priori (e.g. 
a Universal Postal Union-defined address format for each country), or it may be defined for a specific application by a vendor or by a customer," paragraph [0023]); and determining a position range of the feature identification area in the area distribution pattern and then matching feature identification areas corresponding to the position range in the plurality of images according to the position range of the area distribution pattern ("The system may provide for a tolerance or range of variation in image dimensions for the associated images 310, 320 of the mail piece, for example, to account for differences in scanning devices, rates of transport (scanning speeds), alignment and/or skewing of the mail piece, damage to the mail piece, additional markings made on the mail piece, or any combination thereof," paragraph [0041]).

2nd Claim Rejections - 35 USC § 103

Claims 3, 9, 15 and 19 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2014/0140571 A1 (Elmenhurst et al.) in view of US Patent Publication 2018/0268253 A1 (Hoffman et al.).

Claim 3

Regarding Claim 3, Elmenhurst et al. teach the method of claim 1, as noted above.

[Figure: Hoffman et al. Fig. 15, showing analysis of a PowerPoint slide for information.]

Elmenhurst et al. do not explicitly teach all of a sorting method selected by a user. However, Hoffman et al. teach wherein sorting the plurality of images based on the feature content of the feature identification area of each of the images includes: obtaining a sorting method selected by a user ("Identifying slides or pages similar to a given query slide or page, and decks or documents similar to a given query deck or document.
This is useful when a user has already found a relevant slide or deck (or page or document) and is interested in exploring semantically and visually similar variations," paragraph [0039]); according to the sorting method, determining an order of a feature content of the feature identification area of each of the plurality of images ("Assisting users in organizing their content into categories by displaying semantically and visually similar content," paragraph [0042]); and determining an order of the plurality of images based on the order of the feature content of the feature identification area of each of the plurality of images ("Applying analysis to a wide range of features extracted," paragraph [0042] where a wide range of features means the features must be analyzed in an order of a feature content). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify “Document Fingerprinting” as taught by Elmenhurst et al. to use “Systems and Methods for Identifying Semantically and Visually Related Content” as taught by Hoffman et al. The suggestion/motivation for doing so would have been that, “However, document management systems track usage and other statistics related to the two copies separately even though they contain the same information. Keeping separate metrics for these two portions of content dilutes the quality of metrics, which can be made even worse each time the content is copied or a new version is created.” as noted by the Hoffman et al. disclosure in paragraph [0002].

Claim 9

Regarding claim 9, Elmenhurst et al. teach the non-transitory computer readable storage medium of claim 7, as noted above. Elmenhurst et al. do not explicitly teach all of obtaining a sorting method selected by a user. However, Hoffman et al. teach wherein sorting the plurality of images based on the feature content of the feature identification area of each of the images includes: obtaining a sorting method selected by a user ("Identifying slides or pages similar to a given query slide or page, and decks or documents similar to a given query deck or document. This is useful when a user has already found a relevant slide or deck (or page or document) and is interested in exploring semantically and visually similar variations," paragraph [0039]); according to the sorting method, determining an order of a feature content of the feature identification area of each of the plurality of images ("Assisting users in organizing their content into categories by displaying semantically and visually similar content," paragraph [0042]); and determining an order of the plurality of images based on the order of the feature content of the feature identification area of each of the plurality of images ("Applying analysis to a wide range of features extracted," paragraph [0042] where a wide range of features means the features must be analyzed in an order of a feature content). Elmenhurst et al. and Hoffman et al. are combined as per claim 3.

Claim 15

Regarding claim 15, Elmenhurst et al. teach the device of claim 13, as noted above. Elmenhurst et al. do not explicitly teach all of obtain a sorting method selected by a user. However, Hoffman et al. teach wherein to sort the plurality of images based on the feature content of the feature identification area of each of the images, the processor is further to: obtain a sorting method selected by a user ("Identifying slides or pages similar to a given query slide or page, and decks or documents similar to a given query deck or document. This is useful when a user has already found a relevant slide or deck (or page or document) and is interested in exploring semantically and visually similar variations," paragraph [0039]); according to the sorting method, determine an order of a feature content of the feature identification area of each of the plurality of images ("Assisting users in organizing their content into categories by displaying semantically and visually similar content," paragraph [0042]); and determine an order of the plurality of images based on the order of the feature content of the feature identification area of each of the plurality of images ("Applying analysis to a wide range of features extracted," paragraph [0042] where a wide range of features means the features must be analyzed in an order of a feature content). Elmenhurst et al. and Hoffman et al. are combined as per claim 3.

Claim 19

Regarding claim 19, Elmenhurst et al. teach the method of claim 1, as noted above. Elmenhurst et al. do not explicitly teach all of selecting one image randomly. However, Hoffman et al. teach wherein sorting the plurality of images based on the feature content of the feature identification area of each of the images includes: in response to the plurality of images being in a same object category, selecting one image randomly from the plurality of images to determine the object category of the image, the plurality of images belonging to the object category ("An alternative way to compute the clusters is to use a technique like k-means clustering, which iteratively assigns data points to a cluster centroid and moves the centroids to better fit the data.
One of ordinary skill in the art will recognize that other clustering methods may be employed," paragraph [0076]); and in response to the plurality of images being in different object categories, classifying the plurality of images according to content arrangement of content in the plurality of images into image categories ("One such criterion is a threshold on the similarity function defined above. In some embodiments, the clustering method computes many clusters at different similarity thresholds and stores indications of these clusters which can later be used to aggregate performance metrics and enable the interactive user experience," paragraph [0077] where similarity thresholds teaches being in different categories), and selecting one image for each image category to determine the object category ("FIG. 3 is a display page 300 showing a report representative of the performance of a set of slides, grouped together into "families." In this example, each row represents one cluster of similar slides, called a "Slide Family," with the most commonly used slide shown as a thumbnail," paragraph [0053] where a thumbnail is an image). Elmenhurst et al. and Hoffman et al. are combined as per claim 3.

3rd Claim Rejections - 35 USC § 103

Claims 4, 10 and 16 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2014 0140571 A1, (Elmenhurst et al.) and US Patent Publication 2018 0268253 A1, (Hoffman et al.) in view of US Patent Publication 2010 0169309 A1, (Barrett et al.).

Claim 4

Regarding Claim 4, Elmenhurst et al. teach the method of claim 1, as noted above. [Barrett et al. Fig. 5, showing analysis of a user-selected area.] Elmenhurst et al. and Hoffman et al. do not explicitly teach all of a target image selected by a user. However, Barrett et al. teach wherein determining the feature identification area of the image includes: determining a target image selected by a user from the plurality of images ("the interface 251 includes a selector box 253 that allows the user to select a corpus from a list of Corpora (block 201)," paragraph [0037] where the corpus is the target image selected from the corpora); determining the selected feature identification area in the target image based on an input operation of the user on the target image ("a selector box 255 that allows the user to select one or more particular RST relation types from a list of RST relation types supported by the system (block 205A). A similar selector box (not shown) can be used to allow the user to select one or more particular Speech Act relation types from a list of Speech Act relation types supported by the system (block 205B)," paragraph [0037]); and recognizing feature identification areas of images other than the target image ("It is contemplated that the view can be expanded to present the related segments for adjacent nodes (segments) of the hierarchical tree structure and provide for linking to the document context for such nodes in a manner similar to that described above with respect to FIGS. 5B2 and 5B3," paragraph [0044] where such nodes are images other than the target image) in the plurality of images according to a position range of the feature identification area of the target image in the target image ("The Document Segment table 507 includes a segment ID, data that indicates the start and end of the Document Segment, and a Document ID for each Document Segment maintained by the relational database," paragraph [0046]). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify “Document Fingerprinting” as taught by Elmenhurst et al. and “Systems and Methods for Identifying Semantically and Visually Related Content” as taught by Hoffman et al.
to use “System, Method and Apparatus for Information Extraction of Textual Documents” as taught by Barrett et al. The suggestion/motivation for doing so would have been that, “There are many domains where users search a large number of text documents and/or one or more large text documents for content of interest. Such domains include legal research and analysis as well as scientific research and analysis.” as noted by the Barrett et al. disclosure in paragraph [0002].

Claim 10

Regarding claim 10, Elmenhurst et al. teach the non-transitory computer readable storage medium of claim 7, as noted above. Elmenhurst et al. and Hoffman et al. do not explicitly teach all of determining a target image selected by a user from the plurality of images. However, Barrett et al. teach wherein determining the feature identification area of the image includes: determining a target image selected by a user from the plurality of images ("the interface 251 includes a selector box 253 that allows the user to select a corpus from a list of Corpora (block 201)," paragraph [0037] where the corpus is the target image selected from the corpora); determining the selected feature identification area in the target image based on an input operation of the user on the target image ("a selector box 255 that allows the user to select one or more particular RST relation types from a list of RST relation types supported by the system (block 205A).
A similar selector box (not shown) can be used to allow the user to select one or more particular Speech Act relation types from a list of Speech Act relation types supported by the system (block 205B)," paragraph [0037]); and recognizing feature identification areas of images other than the target image ("It is contemplated that the view can be expanded to present the related segments for adjacent nodes (segments) of the hierarchical tree structure and provide for linking to the document context for such nodes in a manner similar to that described above with respect to FIGS. 5B2 and 5B3," paragraph [0044] where such nodes are images other than the target image) in the plurality of images according to a position range of the feature identification area of the target image in the target image ("The Document Segment table 507 includes a segment ID, data that indicates the start and end of the Document Segment, and a Document ID for each Document Segment maintained by the relational database," paragraph [0046]). Elmenhurst et al., Hoffman et al. and Barrett et al. are combined as per claim 4.

Claim 16

Regarding claim 16, Elmenhurst et al. teach the device of claim 13, as noted above. Elmenhurst et al. and Hoffman et al. do not explicitly teach all of determining a target image selected by a user from the plurality of images. However, Barrett et al.
teach wherein to determine the feature identification area of the image, the processor is further to: determine a target image selected by a user from the plurality of images ("the interface 251 includes a selector box 253 that allows the user to select a corpus from a list of Corpora (block 201)," paragraph [0037] where the corpus is the target image selected from the corpora); determine the selected feature identification area in the target image based on an input operation of the user on the target image ("a selector box 255 that allows the user to select one or more particular RST relation types from a list of RST relation types supported by the system (block 205A). A similar selector box (not shown) can be used to allow the user to select one or more particular Speech Act relation types from a list of Speech Act relation types supported by the system (block 205B)," paragraph [0037]); and recognize feature identification areas of images other than the target image ("It is contemplated that the view can be expanded to present the related segments for adjacent nodes (segments) of the hierarchical tree structure and provide for linking to the document context for such nodes in a manner similar to that described above with respect to FIGS. 5B2 and 5B3," paragraph [0044] where such nodes are images other than the target image) in the plurality of images according to a position range of the feature identification area of the target image in the target image ("The Document Segment table 507 includes a segment ID, data that indicates the start and end of the Document Segment, and a Document ID for each Document Segment maintained by the relational database," paragraph [0046]). Elmenhurst et al., Hoffman et al. and Barrett et al. are combined as per claim 4.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. US Patent Publication 2022 0392047 A1 to Wheaton et al.
discloses extracting contextually structured data from document images, such as by automatically identifying document layout, document data, document metadata, and/or correlations therebetween in a document image, for instance. Some embodiments utilize breakpoints to enable the system to match different documents with internal variations to a common template. Several embodiments include extracting contextually structured data from table images, such as gridded and non-gridded tables. Many embodiments are directed to generating and utilizing a document template database for automatically extracting document image contents into a contextually structured format.

[Onischuk, Fig. 1, showing an analysis of a voting form.]

US Patent Publication 2015 0012339 A1 to Onischuk discloses a system in which timely, valid, authentic voter information, selections, write-in choices, and personal security items on a document are machine-read via a software template; the read data are correlated to the document RSID, stored, and published to a privately accessible voter data vault. Adequately completed documents are tallied for certifying. Official computers running artificial intelligence programs manage data (security, processing, integrity), communications, and devices (availability, workload allocation).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /H.E.W./Examiner, Art Unit 2664 Date: 27 October 2025 /JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664
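Editor's note: the clustering passage the examiner quotes from Hoffman et al. (paragraph [0076]) describes standard k-means, which "iteratively assigns data points to a cluster centroid and moves the centroids to better fit the data." The sketch below is purely illustrative of that technique; the data points and function name are invented for this example and do not come from the cited references.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over 2-D feature vectors; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize centroids from the data
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centroids[c] = (
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                )
    return centroids, labels

# Two well-separated groups of hypothetical "slide feature" points
# should land in two distinct clusters.
pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
cents, lbls = kmeans(pts, k=2)
```

As Hoffman et al. note in the quoted passage, other clustering methods (for example, threshold-based clustering per paragraph [0077]) may be substituted for the same grouping role.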

Prosecution Timeline

Dec 13, 2021
Application Filed
Feb 27, 2024
Non-Final Rejection — §103
May 22, 2024
Applicant Interview (Telephonic)
May 22, 2024
Examiner Interview Summary
May 24, 2024
Response Filed
Jul 17, 2024
Final Rejection — §103
Oct 23, 2024
Request for Continued Examination
Oct 26, 2024
Response after Non-Final Action
Feb 25, 2025
Non-Final Rejection — §103
Jun 02, 2025
Non-Final Rejection — §103
Sep 05, 2025
Response Filed
Oct 27, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602755
DEEP LEARNING-BASED HIGH RESOLUTION IMAGE INPAINTING
2y 5m to grant Granted Apr 14, 2026
Patent 12597226
METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING
2y 5m to grant Granted Apr 07, 2026
Patent 12591979
IMAGE GENERATION METHOD AND DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12588876
TARGET AREA DETERMINATION METHOD AND MEDICAL IMAGING SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12586363
GENERATION OF PLURAL IMAGES HAVING M-BIT DEPTH PER PIXEL BY CLIPPING M-BIT SEGMENTS FROM MUTUALLY DIFFERENT POSITIONS IN IMAGE HAVING N-BIT DEPTH PER PIXEL
2y 5m to grant Granted Mar 24, 2026
Precedent list based on this examiner's 5 most recent grants.


Prosecution Projections

6-7
Expected OA Rounds
75%
Grant Probability
93%
With Interview (+18.1%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 77 resolved cases by this examiner. Grant probability derived from career allow rate.
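The note above implies a simple model: grant probability is the examiner's career allow rate (58 granted of 77 resolved), and the with-interview figure adds the observed 18.1-point interview lift. A quick check of that arithmetic (a sketch of the stated model, not the tool's actual methodology):

```python
# Recompute the projection figures from the career counts shown above.
granted, resolved = 58, 77     # examiner's resolved-case record
interview_lift = 18.1          # percentage-point lift observed with interviews

allow_rate = 100 * granted / resolved         # 75.3%
with_interview = allow_rate + interview_lift  # 93.4%

print(round(allow_rate))       # 75, matching the displayed grant probability
print(round(with_interview))   # 93, matching the with-interview figure
```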
