Prosecution Insights
Last updated: April 19, 2026
Application No. 18/462,504

SYSTEM AND METHOD FOR SELECTING A GENERATED IMAGE THAT IS REPRESENTATIVE OF AN INCIDENT

Non-Final OA §103
Filed: Sep 07, 2023
Examiner: BROWN, SHEREE N
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Motorola Solutions Inc.
OA Round: 3 (Non-Final)

Grant Probability: 65% (Favorable)
OA Rounds: 3-4
Time to Grant: 3y 7m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 65% (481 granted / 738 resolved; +3.2% vs TC avg, above average)
Interview Lift: +27.0% (resolved cases with vs. without interview)
Typical Timeline: 3y 7m avg prosecution; 34 currently pending
Career History: 772 total applications across all art units

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 25.0% (-15.0% vs TC avg)
§102: 32.7% (-7.3% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)
Tech Center average is an estimate • Based on career data from 738 resolved cases
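A quick consistency check on these figures: subtracting each statute's delta from its rate reconstructs the Tech Center baseline the dashboard is comparing against, and all four rows imply the same ~40% baseline. A minimal sketch (treating the delta as simple subtraction is an assumption about how the dashboard computes it):

```python
# Per-statute rates and deltas vs the Tech Center average, as shown above
stats = {
    "101": (14.3, -25.7),
    "103": (25.0, -15.0),
    "102": (32.7, -7.3),
    "112": (22.0, -18.0),
}

# Implied TC baseline for each statute: rate - delta
tc_avg = {statute: round(rate - delta, 1) for statute, (rate, delta) in stats.items()}
```

Every row resolves to a 40.0% baseline, which suggests the deltas were all computed against a single Tech Center estimate rather than per-statute averages.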

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Application Status

This office action is responsive to the amendments filed on 01/29/2026. The previous 35 USC 101 rejection is withdrawn in view of the Applicant's claim amendments. This action has been made NON-FINAL.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/29/2026 has been entered.

Response to Arguments

Applicant's arguments filed 01/29/2026 have been fully considered but are not persuasive. The Applicant alleges the following on page 8 of the remarks: "However, it cannot be determined where that text input is a transcript of any type, let alone a transcript of an emergency call associated with an incident."

The examiner is not persuaded. The examiner asserts that the combination of Bakunov and Richardson discloses the Applicant's claim language. More specifically, Richardson discloses the Applicant's claim language of "the initial summary of the incident based in part on a transcript of an emergency call associated with the incident" in Richardson's Paragraphs 0069 and 0072. MPEP § 2106 states that Office personnel are to give claims their broadest reasonable interpretation in light of the supporting disclosure. In re Morris, 127 F.3d 1048, 1054-55, 44 USPQ2d 1023, 1027-28 (Fed. Cir. 1997). Accordingly, the examiner maintains the rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-9, 11-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bakunov (US 20240296535) in view of Richardson (US 20200367040).

Claim 1: Bakunov discloses a computer-implemented method (See Bakunov Abstract) but fails to disclose "the initial summary of the incident based in part on a transcript of an emergency call associated with the incident." This feature is disclosed in Paragraphs 0069 and 0072 of Richardson. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have further modified Bakunov with the teachings of Richardson to enable improved analysis of an emergency call using natural language processing (See Richardson, Summary of Invention). Additionally, both references are analogous art directed to the same field of endeavor, artificial intelligence; this close relation suggests a reasonable expectation of success.
As modified, the combination of Bakunov and Richardson discloses the following:

generating, by a processor, an initial summary ("text" See Bakunov Paragraphs 0020; 0023; 0062-0064; 0096) of an incident ("event" See Bakunov Paragraph 0052) using an artificial intelligence processing tool ("AI Image generation service" See Bakunov Paragraphs 0002; 0023; 0094-0096), the initial summary of the incident based in part on a transcript of an emergency call associated with the incident (See Richardson Paragraphs 0069; 0072);

generating, by a processor, at least two images based on the initial summary ("automated image generators may be text-to-image machine learning models" See Bakunov Paragraphs 0020; 0023; 0062-0064; 0096) using an artificial intelligence image generation tool ("AI Image generation service" See Bakunov Paragraphs 0002; 0023; 0094-0096);

generating, by a processor, for each of the at least two images, a subsequent summary for each image ("Automated image generators utilizing text-to-image technology, e.g., generators built on diffusion models or Generative Adversarial Networks (GANs), may be able to generate high-fidelity images in response to a user's prompts" See Bakunov Paragraph 0020) using an artificial intelligence image-to-text generation tool ("AI Image generation service" See Bakunov Paragraphs 0002; 0023; 0094-0096);

comparing, by a processor, the initial summary to each of the subsequent summaries ("At block 620, the first image (the image selected from the first set of images generated by the first automated image generator) and the second image (the image selected from the second set of images generated by the second automated image generator) are automatically compared by the image quality evaluation system 236. Comparisons and rankings (as described below) may be automatically carried out, e.g., by one or more machine learning models or by other computing components" See Bakunov Figure 6A, Item 620; Figure 6B, Item 640; Paragraphs 0135; 0148) using a similarity determination artificial intelligence ("The second machine learning model may apply one or more machine learning-based tasks and one or more other tasks, such as automatic rules-based calculations (e.g., a cosine similarity method), to generate the output." See Bakunov Paragraph 0126);

selecting, by a processor ("a first image from the first set of images and a second image from the second set of images may be automatically selected" See Bakunov Paragraphs 0020-0025), based on the comparison (See Bakunov Figure 6A, Item 620; Figure 6B, Item 640; Paragraphs 0135; 0148), the generated image associated with the subsequent summary that is most similar to the initial summary (See Bakunov Paragraphs 0020-0025) as representative of the incident ("event" See Bakunov Paragraph 0052); and

associating, by a processor (See Bakunov Figure 6A, Item 620; Figure 6B, Item 640; Paragraphs 0135; 0148), the selected image with a database record ("The database 304 also stores augmentation data, such as overlays or filters, in an augmentation table 312. The augmentation data is associated with and applied to videos (for which data is stored in a video table 314) and images (for which data is stored in an image table 316)." See Bakunov Paragraph 0073) corresponding to the incident ("event" See Bakunov Paragraph 0052).

Claim 3: Bakunov and Richardson disclose generating the at least two images based in part on audio input associated with the incident (See Bakunov Paragraphs 0045; 0052).

Claim 4: Bakunov and Richardson disclose generating the at least two images based in part on visual input associated with the incident (See Bakunov Paragraph 0099).

Claim 5: Bakunov and Richardson disclose generating the at least two images based in part on metadata associated with the incident (See Bakunov Paragraphs 0082; 0096; 0113).

Claim 6: Bakunov and Richardson disclose further comprising: generating a second initial summary ("text" See Bakunov Paragraphs 0020; 0023; 0062-0064; 0096) of a second incident ("event" See Bakunov Paragraph 0052) using the artificial intelligence processing tool ("AI Image generation service" See Bakunov Paragraphs 0002; 0023; 0094-0096); generating at least two images based on the second initial summary ("automated image generators may be text-to-image machine learning models" See Bakunov Paragraphs 0020; 0023; 0062-0064; 0096) using the artificial intelligence image generation tool ("AI Image generation service" See Bakunov Paragraphs 0002; 0023; 0094-0096); generating, for each of the at least two images based on the second incident, a second subsequent summary for each image using the artificial intelligence image-to-text generation tool ("AI Image generation service" See Bakunov Paragraphs 0002; 0023; 0094-0096); comparing the second initial summary to each of the second subsequent summaries (See Bakunov Figure 6A, Item 620; Figure 6B, Item 640; Paragraphs 0135; 0148); selecting the generated image associated with the second subsequent summary that is most similar to the second initial summary as representative of the second incident ("event" See Bakunov Paragraph 0052); comparing the selected generated image representative of the incident with the selected generated image representative of the second incident (See Bakunov Figure 6A, Item 620; Figure 6B, Item 640; Paragraphs 0135; 0148); and determining the incident and the second incident are related based on the comparing (See Bakunov Figure 6A, Item 620; Figure 6B, Item 640; Paragraphs 0135; 0148).

Claim 7: Bakunov and Richardson disclose wherein the initial summary is based on correspondence of two descriptions of the incident (See Bakunov Paragraphs 0052; 0096).

Claim 8: Bakunov and Richardson disclose wherein the correspondence is similarities between the two descriptions of the incident (See Bakunov Paragraphs 0052; 0096).

Claims 9 and 11-14: Rejected on the same basis as claims 1 and 3-6.

Claims 15 and 17-20: Rejected on the same basis as claims 1 and 3-6.

Pertinent Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent Application Publication No. 20230081171 includes receiving, by a computing device, a particular textual description of a scene.
The method also includes applying a neural network for text-to-image generation to generate an output image rendition of the scene, the neural network having been trained to cause two image renditions associated with the same textual description to attract each other and two image renditions associated with different textual descriptions to repel each other, based on mutual information between a plurality of corresponding pairs, wherein the plurality of corresponding pairs comprises an image-to-image pair and a text-to-image pair. The method further includes predicting the output image rendition of the scene.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHEREE N BROWN, whose telephone number is (571) 272-4229. The examiner can normally be reached M-F, 5:30 AM - 2:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SAID BROOME, can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHEREE N BROWN/
Primary Examiner, Art Unit 2612
February 11, 2026
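The selection loop the rejected claim recites (initial summary → generated images → image-to-text captions → similarity comparison → pick the closest match) can be sketched in a few lines. This is an illustrative sketch only, not code from either reference: `caption_fn` and the `captions` dict are hypothetical stand-ins for an image-to-text model, and the bag-of-words cosine similarity stands in for the "cosine similarity method" Bakunov mentions in Paragraph 0126, which would normally operate on learned embeddings.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity over naive bag-of-words vectors (a stand-in
    for comparing learned text embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_representative_image(initial_summary, images, caption_fn):
    """Caption each candidate image, compare each caption against the
    initial incident summary, and return the image whose caption is
    most similar -- the selection step the claim recites."""
    return max(images, key=lambda img: cosine_similarity(initial_summary, caption_fn(img)))

# Hypothetical captions standing in for an image-to-text model's output
captions = {
    "img_a": "a car crash on a highway at night",
    "img_b": "a quiet beach at sunset",
}
best = select_representative_image(
    "emergency call reporting a car crash on the highway",
    list(captions),
    captions.get,
)
```

Under this sketch, `best` is the image whose round-tripped caption shares the most vocabulary with the incident summary; swapping the bag-of-words comparison for embedding similarity changes only `cosine_similarity`, not the selection logic.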

Prosecution Timeline

Sep 07, 2023
Application Filed
Jul 03, 2025
Non-Final Rejection — §103
Sep 18, 2025
Interview Requested
Oct 02, 2025
Applicant Interview (Telephonic)
Oct 02, 2025
Examiner Interview Summary
Oct 08, 2025
Response Filed
Oct 27, 2025
Final Rejection — §103
Jan 29, 2026
Request for Continued Examination
Feb 02, 2026
Response after Non-Final Action
Feb 11, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593956
METHOD FOR BUILDING IMAGE READING MODEL BASED ON CAPSULE ENDOSCOPE, DEVICE, AND MEDIUM
2y 5m to grant • Granted Apr 07, 2026
Patent 12573130
METHOD AND SYSTEM PROVIDING TEMPORARY TEXTURE APPLICATION TO ENHANCE 3D MODELING
2y 5m to grant • Granted Mar 10, 2026
Patent 12548204
NEURAL FRAME EXTRAPOLATION RENDERING MECHANISM
2y 5m to grant • Granted Feb 10, 2026
Patent 12541487
Method for Constructing Database, Method for Retrieving Document and Computer Device
2y 5m to grant • Granted Feb 03, 2026
Patent 12541539
METHODS AND SYSTEMS FOR A COMPLIANCE FRAMEWORK DATABASE SCHEMA
2y 5m to grant • Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 92% (+27.0%)
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 738 resolved cases by this examiner. Grant probability derived from career allow rate.
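The headline projections follow from the examiner's career stats: a minimal sketch of the apparent arithmetic (treating the interview lift as additive percentage points is an assumption about how the dashboard combines the numbers):

```python
# Career figures from the examiner stats above
granted, resolved = 481, 738
allow_rate = granted / resolved            # ~0.652, displayed as 65%

# Interview lift of +27.0 percentage points, assumed additive
interview_lift = 0.27
with_interview = allow_rate + interview_lift  # ~0.92, displayed as 92%
```

Rounding `allow_rate` and `with_interview` to whole percentages reproduces the 65% and 92% figures shown, consistent with the note that grant probability is derived directly from the career allow rate.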
