Prosecution Insights
Last updated: April 19, 2026
Application No. 18/765,524

SYSTEM AND METHOD FOR GENERATING AN IMAGE USING A GENERATIVE ARTIFICIAL INTELLIGENCE

Status: Non-Final OA (§103)
Filed: Jul 08, 2024
Examiner: HOANG, PETER
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Motorola Solutions Inc.
OA Round: 1 (Non-Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 7m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 81% (above average; 435 granted / 539 resolved; +18.7% vs TC avg)
Interview Lift: +11.7% (moderate) among resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 12 applications currently pending
Career History: 551 total applications across all art units
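The headline figures in this card reduce to simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, assuming the Tech Center average is the value implied by the +18.7% delta quoted above (back-computed here for illustration, not an independently published figure):

```python
# Headline examiner statistics as simple arithmetic over resolved cases.
granted = 435           # granted applications (from the card above)
resolved = 539          # total resolved cases
tc_avg_allow = 0.620    # implied by the +18.7% delta; not a published figure

allow_rate = granted / resolved              # 435/539 ≈ 0.807, shown as 81%
delta_vs_tc = allow_rate - tc_avg_allow      # ≈ +18.7 percentage points

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Delta vs TC avg:   {delta_vs_tc:+.1%}")
```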

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 539 resolved cases.
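A convenient consistency check on this table: each quoted delta is just the examiner's rate minus the Tech Center average, and all four rows are consistent with a single implied TC average of 40.0% (a value back-computed here for illustration, not taken from USPTO data):

```python
# Per-statute rates for this examiner vs. the implied TC average.
# Examiner rates are from the table above; 40.0 is the TC average implied
# by every quoted delta (rate minus delta equals 40.0 in all four rows).

examiner_rate = {"101": 11.8, "103": 54.8, "102": 6.7, "112": 13.6}  # percent
tc_average = 40.0                                                    # percent

for statute, rate in examiner_rate.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```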

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 16-20 are objected to because of the following informalities: claims 16-20 recite "The medium of…", but the examiner recommends clarifying that the medium is a "non-transitory processor readable medium," for consistency. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 8-10, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Brooks et al. ("InstructPix2Pix: Learning to Follow Image Editing Instructions") in view of Zhang et al. ("TIE: Revolutionizing Text-based Image Editing for Complex-Prompt Following and High-Fidelity Editing").

Re claim 1, Brooks teaches a method of generating an image using a generative Artificial Intelligence (AI) comprising:

receiving a prompt to generate the image, the prompt including factual information (see p. 18392, in reference to Fig. 1, wherein a prompt takes a given image and instruction, the input image being the factual information; see also p. 18399, in reference to Figs. 11-12, wherein the input image on the left is an original input image, i.e., factual information prompted along with a text instruction);

generating a first portion of the image, the first portion of the image based on the factual information included in the prompt, the first portion of the image having a first stylization (see Fig. 11, wherein each subsequent edit contains at least a first portion of the image that is based on the inputted factual information, the first portion having a realistic style);

generating a second portion of the image, the second portion of the image based on information auto-generated by the generative AI, the second portion of the image having a second stylization (see Fig. 12, wherein each subsequent edit contains at least a second portion of the image based on generative AI, having a second stylization such as an eerie thunderstorm and an inserted train); and

displaying the image with the first and second stylizations (see Fig. 12, wherein the generated images contain factual portions and AI-generated portions).

Brooks does not explicitly teach wherein the first and second stylizations indicate portions of the image that were based on factual information and portions of the image that were based on the generative AI auto-generated information.
However, Zhang teaches wherein the first and second stylizations indicate portions of the image that were based on factual information and portions of the image that were based on the generative AI auto-generated information (see Figs. 1-2 and 5-7, wherein areas of an image are indicated and masked off as part of the factual source image, and portions based on the generative AI auto-generated information are indicated by an editing mask, such as removal of items; for example, Fig. 7 shows an editing mask to put a black hat on the man at the left of the bus).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Brooks's system of generating an image using AI to explicitly indicate portions of the image that are based on generative AI auto-generated information, as taught by Zhang, as the references are in the analogous art of generative AI for generating images via prompt. An advantage of the modification is that it explicitly indicates which portions of an image are factual and which are generated by AI, so that a viewer can discern which parts of the image are real or fake.

Re claim 2, Brooks and Zhang teach claim 1. Furthermore, Brooks teaches receiving the prompt, modified by a user, that also includes user-modified information; generating a third portion of the image, the third portion of the image based on the user-modified information, the third portion of the image having a third stylization; and displaying the image with the third stylization (see Fig. 11, wherein each subsequent compounded edit (a third portion and third stylization based on a user prompt) modifies the image with a portion of the image having a third stylization, such as turning the image into an oil pastel drawing or giving it a dark, creepy vibe, for display).

Re claim 3, Brooks and Zhang teach claim 1.
Furthermore, Brooks teaches wherein the first stylization is a realistic graphics style and the second stylization is a non-realistic graphics style (see Fig. 13, wherein a realistic style, such as an image of the Eiffel Tower, includes a second, non-realistic style of being moved to Mars).

Claims 8-10 recite limitations similar in scope to claims 1-3, respectively, and are rejected for at least the reasons above. Claims 15-16 recite limitations similar in scope to claims 1-2, respectively, and are rejected for at least the reasons above.

Claims 4, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Brooks et al. ("InstructPix2Pix: Learning to Follow Image Editing Instructions") in view of Zhang et al. ("TIE: Revolutionizing Text-based Image Editing for Complex-Prompt Following and High-Fidelity Editing") and Muscolino et al. (US 20230237708).

Re claim 4, Brooks and Zhang teach claim 1. Brooks and Zhang do not explicitly teach displaying the first portion of the image as a first layer and displaying the second portion of the image as a second layer, wherein each layer can be independently displayed.

However, Muscolino teaches displaying the first portion of the image as a first layer and displaying the second portion of the image as a second layer, wherein each layer can be independently displayed ([0033] Using this information, the graphic design system automatically generates semantic layers based on the user's interaction with the drawing. This enables the graphic design system to organize the drawing for the user, without requiring additional input. This can include grouping or consolidating layers of similar content, arranging the layers in an appropriate z-order based on the content of the layers, automatically sizing content as it is added based on its location within the drawing and any adjacent content or layers, etc. Additionally, from this semantic context, the graphic design system can apply content linting rules.
Similar to traditional linting, where code is analyzed for programmatic or stylistic errors, content linting determines spatial, semantic, or other inconsistencies between layers and/or content of the drawing based on the semantic context of the drawing), ([0034] Additionally, the semantic information can be used to hierarchically organize the drawing. For example, a scene graph represents the layers of the drawing and how those layers are related. Using the semantic information, these relationships can include semantic relationships capturing how the content of different layers are related to each other. Using the semantic scene graph, layers can be more readily selected for editing or other manipulation by the user. Additionally, the semantic scene graph provides an organized summary of the drawing that is more useful to other users who may have additional work to add to the drawing. This is particularly useful for collaborative documents being worked on by remote teams, where additional context is helpful), (see [0048], wherein a layer manager allows for selection of layers), and (see [0042-0045], in reference to Fig. 2, wherein portions of an image are displayed, including first and second layers, based at least on semantic organization, such that different layers are independently categorized and displayed).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Brooks and Zhang's system of generating an image based on modified portions of an image, including first and second portions, to explicitly include portions of the image organized as semantic layers that are independently displayed, as taught by Muscolino, as the references are in the analogous art of image generation, modification, and analysis with computer assistance.
An advantage of the modification is that it allows for selectable layers of content in the image, to better view, analyze, and edit different portions of an image based on the semantic organization of those portions.

Claim 11 recites limitations similar in scope to claim 4 and is rejected for at least the reasons above. Claim 17 recites limitations similar in scope to claim 4 and is rejected for at least the reasons above.

Claims 5, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Brooks et al. ("InstructPix2Pix: Learning to Follow Image Editing Instructions") in view of Zhang et al. ("TIE: Revolutionizing Text-based Image Editing for Complex-Prompt Following and High-Fidelity Editing") and Jorasch et al. (US 20230152906).

Re claim 5, Brooks and Zhang teach claim 1. Brooks and Zhang teach using inputted prompt data, including text, for generating image data, but do not explicitly teach wherein at least a portion of the factual information is received from an emergency call.

However, Jorasch teaches wherein at least a portion of the factual information is received from an emergency call (see [2220-2223] and [2474-2475], wherein speech-to-text software, including for emergency calls, translates voice/sound input information into text).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Brooks and Zhang's system of generating an image based on prompt data, including text, to explicitly include voice/speech data that is converted to text, as taught by Jorasch, as the references are in the analogous art of text analysis and processing. An advantage of the modification is that it allows voice/speech data, including emergency call data, to modify image data via translation of the audio to text.
Claim 12 recites limitations similar in scope to claim 5 and is rejected for at least the reasons above. Claim 18 recites limitations similar in scope to claim 5 and is rejected for at least the reasons above.

Claims 6, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Brooks et al. ("InstructPix2Pix: Learning to Follow Image Editing Instructions") in view of Zhang et al. ("TIE: Revolutionizing Text-based Image Editing for Complex-Prompt Following and High-Fidelity Editing") and Chunduru et al. (US 20250157632).

Re claim 6, Brooks and Zhang teach claim 1. Brooks and Zhang do not explicitly teach wherein the factual information is validated prior to generating the image.

However, Chunduru teaches wherein the factual information is validated prior to generating the image ([0059] With continued reference to FIG. 1, in some cases, processor 104 may be configured to continuously monitor image generator 156. In an embodiment, processor 104 may configure the discriminator to provide ongoing feedback and further corrections as needed to subsequent input data. An iterative feedback loop may be created as processor 104 continuously receives real-time data, identifies errors (e.g., distance between generated medical image 152 and real medical images) as a function of the real-time data, delivers corrections based on the identified errors, and monitors subsequent model outputs and/or user feedback on the delivered corrections. In an embodiment, processor 104 may be configured to retrain one or more generative machine learning models within image generator 156 based on user-modified/annotated medical images, or to update the training data of one or more generative machine learning models within image generator 156 by integrating validated medical images (i.e., subsequent model output) into the original training data.
In such an embodiment, the iterative feedback loop may allow image generator 156 to adapt to the user's needs and performance requirements, enabling one or more generative machine learning models described herein to learn and update based on user responses and generated feedback).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Brooks and Zhang's system of generating an image using generative systems to explicitly include validating the factual information of an image, as taught by Chunduru, as the references are in the analogous art of generative networks for image analysis/generation. An advantage of the modification is that it validates image data, such as whether the image data is real or generated, and further trains the system to improve the accuracy of distinguishing real image data from artificially generated image data.

Claim 13 recites limitations similar in scope to claim 6 and is rejected for at least the reasons above. Claim 19 recites limitations similar in scope to claim 6 and is rejected for at least the reasons above.

Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Brooks et al. ("InstructPix2Pix: Learning to Follow Image Editing Instructions") in view of Zhang et al. ("TIE: Revolutionizing Text-based Image Editing for Complex-Prompt Following and High-Fidelity Editing") and Bradbury et al. (US 20150348284).

Re claim 7, Brooks and Zhang teach claim 2. Brooks and Zhang do not explicitly teach wherein the user modifies at least one of the second or third portions of the image to indicate it should belong to the first portion of the image.
However, Bradbury teaches wherein the user modifies at least one of the second or third portions of the image to indicate it should belong to the first portion of the image ([0020] In one embodiment, the editing application generates a new layer for each edit made to the terrain, as well as updates 2D and 3D previews of the edited terrain. Layers provide a non-destructive editing approach that, together with the tool set discussed above, improves the artist's workflow in creating virtual terrains. The layers themselves may be merged or deleted, as desired. As used herein, "merging" generally refers to creating a single, merged layer which includes the features of multiple layers being merged. As discussed, in one embodiment the editing application may also provide the user the ability to modify frequencies of each layer, and the editing application may then blend the layers with frequency-selective Laplacian blending. The final virtual terrain may be the result of such blending), and (see [0033], [0048], and [0054], wherein a user is able to identify and edit different layers to make edits such as deleting or merging image data).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Brooks and Zhang's system of generating image data of different portions to explicitly include user modifications indicating portions that should belong to a first portion, as taught by Bradbury, as the references are in the analogous art of image generation and modification. An advantage of the modification is that it provides tools for a user to indicate and correct portions of an image, such as indicating that a second portion should belong to a first portion of the image.

Claim 14 recites limitations similar in scope to claim 7 and is rejected for at least the reasons above.
Claim 20 recites limitations similar in scope to claim 7 and is rejected for at least the reasons above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Peter Hoang, whose telephone number is (571) 270-1346. The examiner can normally be reached Monday-Friday, 8:00 am - 5:00 pm PST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel F. Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER HOANG/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Jul 08, 2024
Application Filed
Jan 03, 2026
Non-Final Rejection — §103
Mar 30, 2026
Interview Requested
Apr 06, 2026
Applicant Interview (Telephonic)
Apr 06, 2026
Examiner Interview Summary
Apr 07, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597192: INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12582906: SYSTEM FOR GENERATING ANIMATION WITHIN A VIRTUAL ENVIRONMENT
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12561902: INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12555318: Systems and Methods for Adaptive Streaming of Point Clouds
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12530841: INTELLIGENT METHOD TO DYNAMICALLY PRIORITIZE AND ORCHESTRATE SPATIAL COMPUTING DATA FEEDS LEVERAGING QUANTUM GENERATIVE ARTIFICIAL INTELLIGENCE
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 92% (+11.7%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 539 resolved cases by this examiner. Grant probability derived from career allow rate.
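The projection card combines the two numbers already shown: the baseline grant probability is the career allow rate, and the interview scenario adds the observed lift. A sketch of that arithmetic, assuming a simple additive combination (the tool's actual model is not disclosed):

```python
# Projection sketch: baseline = career allow rate; interview adds the lift.
baseline = 435 / 539        # career allow rate ≈ 80.7%, displayed as 81%
interview_lift = 0.117      # +11.7% observed lift with an interview

with_interview = baseline + interview_lift   # ≈ 0.924, displayed as 92%

print(f"Baseline grant probability: {baseline:.0%}")
print(f"With interview:             {with_interview:.0%}")
```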
