Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Objections
Claims 1 and 21-22 are objected to because of the following informalities: Claims 1 and 21-22 recite “AI”. “AI” is an acronym, and an acronym must be spelled out at least once before it is used.
Claim 1 recites “the generated images” in line 13. It should be “the plurality of generated images”.
Claim 22 recites “the system” in line 3. It should be “the computer system”.
Claim 22 recites “the generated images” in line 15. It should be “the plurality of generated images”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "generated synthetic images” in lines 11-12. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 also recites the limitation “each of the images” in line 12. It is unclear whether “the images” refers to “the plurality of images”, the “generated synthetic images”, or something else.
Claims 2-3 and 5 recite the phrase “should be performed”, which introduces ambiguity as to whether the claims require the second iteration of image data generation and analysis to actually occur or merely to be capable of occurring.
Claim 4 is rejected based on the rejection of claims 2-3.
Claim 5 recites the limitation "the plurality of synthesized images” in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 6 is rejected based on the rejection of claim 5.
Claims 7-8 recite the phrase “should not be performed”, which introduces ambiguity as to whether the claims require the third iteration of image data generation and analysis to actually not occur or merely to be capable of not occurring.
Claims 9-10 are rejected based on the rejection of claims 7-8.
Claim 18, which depends from claim 1, recites the limitation “generating the performance data for an image of the plurality of images” in lines 1-2, while claim 1 recites “generating, for each image of the generated images, performance data”. It is unclear whether “the plurality of images” refers to the “generated images” or something else.
Claims 2-20 are rejected based on the rejection of claim 1.
Claim 21 recites the limitation “each of the images” in line 13. It is unclear whether “the images” refers to “the plurality of images”, “the plurality of generated images”, or something else.
Claim 21 recites “the synthesized images” in line 14. There is insufficient antecedent basis for this limitation in the claim.
Claim 22 recites the limitation “each of the images” in line 14. It is unclear whether “the images” refers to “the plurality of images”, “the plurality of generated images”, or something else.
Note: No prior art has been found that discloses or renders obvious independent claims 1, 21, or 22.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
1. BHAKTHAVATSALAM et al., U.S. Patent Application Publication No. 2025012462 -
A device includes a processor, and a memory storing executable instructions which, when executed by the processor, cause the processor alone or in combination with other processors to perform the following functions: receive textual user input from a user describing a design to be generated; implement a first prompt generator to generate a first prompt for a Large Language Model (LLM) to restructure the user input; and implement a second prompt generator to generate a second prompt for a text-to-image model using output of the LLM, the second prompt to prompt the text-to-image model to produce a proposed design based on the user input. The proposed design is provided to the user via an application comprising controls for further editing the proposed design.
2. Khan et al., U.S. Patent Application Publication No. 20240338946 - Provided are computer-implemented technologies for crowd analysis. The technologies process input video footages of a crowd to detect people from the input, uniquely identify the detected people, track and count them across all the frames throughout the footages, classify them based on features of the identified people such as detected gender of the identified people. The technologies use customarily trained AI models that are specifically trained and retrained for detecting people in the local population that typically wear Arab style clothing. The learned models, through the training, quality controlling and retraining, attain the prediction capability for accurately detecting, identifying and classifying people from crowd scenes. The learned model, then, is applied to crowd analysis to provide detection and classification report of the crowd, which can be used for deeper analysis to fill various business/institutional needs.
3. HOFFMAN et al., U.S. Patent Application Publication No. 20250308225 - System 100 for implementing training of a pre-trained object detection model for detecting new object classes. System 100 includes a computing system 105. In some examples, computing system 105 includes orchestrator 110, which may include at least one of one or more processors 115a, a data storage device 115b, a user interface (“UI”) system 115c, and/or one or more communications systems 115d. In some cases, computing system 105 may further include artificial intelligence (“AI”) system 120 that trains and uses an object detection model 125, which is a model that has a general architecture including a backbone portion 125a and a head portion 125b. The backbone portion 125a is configured to detect or identify (in some cases, to compute) general features in an image and to generate a numerical representation for each general feature, while the head portion 125b is configured to perform at least one of classification, confidence determination, and/or bounding box detection, in some cases, based on the numerical representation for each general feature. In examples, object detection model 125 includes a You Only Look Once (“YOLO” or “YOLOX”) convolutional neural network (“CNN”)-based model, a Region-Based Convolutional Neural Networks (“R-CNNs”)-based model, a Scale-Invariant Feature Transform (“SIFT”)-based model, or Histogram of Oriented Gradients (“HOG”)-based model. In other examples, the object detection model 125 includes any suitable neural network that has a backbone portion and a head portion as described above. In yet other examples, the object detection model 125 includes any suitable neural network or machine learning (“ML”) model that is configured to perform object detection.
4. CHUANG et al., U.S. Patent Application Publication No. 2025/0252579 - Performing a computer-implemented process involves receiving frames of a scene, the frames including an object, generating a first bounding box so that the object in a first frame of the frames is within the first bounding box, providing a first prompt based on the first bounding box. The object is segmented from the first frame based on the first prompt to provide a first mask, a second bounding box is generated so that the object in a second frame of the frames is within the second bounding box based on the first mask. The second prompt based on the second bounding box is provided, and the object is segmented from the second frame based on the second prompt to provide a second mask.
5. Cho et al., U.S. Patent Application Publication No. 20250245871 -
Provided is a method for generating images, which is performed by one or more processors, and includes receiving a first dot associated with a first object, and generating a first synthesized image based on the first dot using an image generation model, in which the first dot includes first class information and first position information associated with the first object, and the first synthesized image is a synthesized image in which the first object corresponding to the first class information is placed at the first position.
6. Wang et al., U.S. Patent Application Publication No. 20250131110 -
Computer-implemented methods, systems, and devices for generating security challenges are described. An identity management system may obtain image descriptions. The image descriptions may include a first image description set that corresponds to a sequence of events and a second image description set that is unassociated with the sequence of events. The identity management system may obtain images based on the image descriptions. The images may include a first image set that corresponds to the sequence of events and a second image set that is unassociated with the sequence of events. The identity management system may generate a security challenge using the images. The security challenge may request for a user to identify the sequence of events from the images. Identification of the sequence of events may be based on each image of the first image set being contextually relevant to the sequence of events.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARAH LE whose telephone number is (571) 270-7842. The examiner can normally be reached Monday: 8AM-4:30PM EST, Tuesday: 8AM-3:30PM EST, Wednesday: 8AM-2:30PM EST, Thursday and Friday off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARAH LE/Primary Examiner, Art Unit 2614