DETAILED ACTION
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4-6, 11-12, 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Barbosa in view of Varol et al. (Varol G, Laptev I, Schmid C, Zisserman A. Synthetic humans for action recognition from unseen viewpoints. International Journal of Computer Vision. 2021 Jul;129(7):2264-87.) and further in view of Tripathi et al. (Tripathi, Shashank, et al. "Learning to generate synthetic data via compositing." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.).
RE claim 1, Barbosa teaches A method comprising (abstract, Fig 1):
obtaining, by one or more processors, a background image of a building/work site (Fig 2 #250, col 6 lines 1-12);
obtaining, by the one or more processors, a subject image depicting a subject (Fig 2 #200, col 5 lines 3-10);
generating, by the one or more processors using an artificial intelligence (AI) model, a blended image by combining the subject image with the background image, the blended image depicting the subject at the building/work site (Figs 1, 6 abstract, col 2 lines 53-59); and
providing, by the one or more processors, the blended image as training data input to at least one neural network to configure the at least one neural network using the blended image (Figs 1, 6, abstract, col 3 lines 22-25).
Barbosa is silent RE: a subject event and the blended image depicting the subject event occurring at the building/work site using a generative artificial intelligence (AI) model. However, Varol teaches generating a synthetic image/video depicting the subject event occurring at the building/work site for training human action detection from the synthetic images/videos (Figs 1-3, abstract, page 2267 col 1). In addition, Tripathi teaches the blended image depicting the subject event occurring at the building/work site using a generative artificial intelligence (AI) model (Fig 2, abstract, page 463 col 1), in order to create event-based synthetic data.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Barbosa a system and method of an input subject event image and the blended image depicting the subject event occurring at the building/work site using a generative artificial intelligence (AI) model, as set forth above in the combination of Varol and Tripathi, in order to create event-based synthetic data on the specific background image utilizing the generative network to create realistic training datasets, thereby increasing system effectiveness and user experience.
RE claim 2, Barbosa as modified by Varol and Tripathi teaches wherein obtaining the subject image comprises generating the subject image using a second generative AI model (Varol Figs 1-3, abstract, page 2267 col 1, page 2278 col 2).
RE claim 4, Barbosa as modified by Varol and Tripathi teaches wherein obtaining the subject image comprises: obtaining a reference image depicting the subject event occurring at a location other than the building/work site; identifying a portion of the reference image depicting the subject event; and extracting the portion of the reference image depicting the subject event (Varol Figs 1-3, abstract, page 2267 col 1).
RE claim 5, Barbosa as modified by Varol and Tripathi teaches further comprising labeling, by the one or more processors, the subject image or the blended image with one or more tags or attributes identifying at least one of the subject event depicted in the subject image or the blended image or a boundary defining a portion of the subject image or the blended image depicting the subject event (Barbosa abstract, Fig 1, col 3 lines 2-5, col 5 lines 20-23. In addition Varol abstract, page 2266 col 1 “assign an action label to synthetic videos and define the supervision directly on action classification”.).
RE claim 6, Barbosa as modified by Varol and Tripathi teaches further comprising: obtaining, by the one or more processors, new camera images from the building/work site; providing, by the one or more processors, the new camera images as input to the at least one neural network; and detecting, by the one or more processors, the subject event occurring at the building/work site based on an output of the neural network provided responsive to the new camera images (Varol Figs 1-3, abstract, page 2267 col 1).
Claims 11-12, 14-16 recite limitations similar in scope to the limitations of claims 1-2, 4-6 and are therefore rejected under the same rationale. In addition, Barbosa teaches A system comprising: one or more processors; and one or more non-transitory computer-readable media storing (Fig 10, col 29 lines 29-31).
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Barbosa as modified by Varol and Tripathi, and further in view of Shi (US 20240355022 A1) and Pandya et al (US 11675878 B1).
RE claim 3, Barbosa as modified by Varol and Tripathi is silent RE wherein generating the subject image using the second generative AI model comprises: receiving a prompt comprising one or more risk criteria describing a risk associated with a person, building system, device, or piece of equipment; providing the prompt as input to the second generative AI model; and generating the subject image as an output of the second generative AI model in response to the prompt. However, Shi teaches receiving a prompt comprising one or more criteria describing a concept/action associated with a person, building system, device, or piece of equipment; providing the prompt as input to the second generative AI model; and generating the subject image as an output of the second generative AI model in response to the prompt (Figs 2-4, abstract, [0020]-[0021], [0027], [0043], [0073], etc.), for generating a customized image using a machine learning model and a text description. Shi is silent RE risk criteria describing a risk; however, this appears to be an obvious design or application-specific choice. For example, Pandya teaches auto-labeling/training of a safety and risk detection or management system related to various aspects of an industrial workplace that can detect risky events based on pretrained data with a contextual description of the scene/event/workflow (abstract, col 7 lines 55-64), to trigger appropriate alarms (col 2 lines 52-59), with risk criteria, e.g., a proximity threshold (col 3 line 65 - col 4 line 3) to prevent a collision, or any risky/hazardous events (e.g., col 10 lines 55-65). This can equally be combined to generate the subject events based on text descriptions of risk criteria, as would be readily recognized by one of ordinary skill in the art.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Barbosa as modified by Varol and Tripathi a system and method of receiving a prompt comprising one or more risk criteria describing a risk associated with a person, building system, device, or piece of equipment; providing the prompt as input to the second generative AI model; and generating the subject image as an output of the second generative AI model in response to the prompt, as set forth above in the combination of Shi and Pandya, in order to create risk-event-based synthetic data from a textual prompt to manage risk in an industrial environment, thereby increasing system effectiveness and user experience.
Claim 13 recites limitations similar in scope to the limitations of claim 3 and is therefore rejected under the same rationale.
Claims 7-9 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Barbosa as modified by Varol and Tripathi, and further in view of Pandya et al.
RE claim 7, Barbosa as modified by Varol and Tripathi is silent RE further comprising triggering, by the one or more processors, an alarm in response to detecting the subject event in the new camera images.
However, Pandya teaches triggering appropriate alarms to prevent a collision, or any risky/hazardous events, in col 2 lines 52-59 and col 10 lines 55-65.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Barbosa as modified by Varol and Tripathi a system and method of triggering, by the one or more processors, an alarm in response to detecting the subject event in the new camera images, as suggested by Pandya, in order to prevent a collision or any risky/hazardous events, thereby increasing system effectiveness and user experience.
RE claim 8, Barbosa as modified by Varol, Tripathi and Pandya teaches wherein triggering the alarm comprises: identifying a person or group responsible for addressing the alarm based on a type of the alarm and a role of the person or group; and transmitting the alarm to the identified person or group responsible for addressing the alarm (Pandya col 2 lines 52-59, col 10 lines 55-65).
RE claim 9, Barbosa as modified by Varol, Tripathi and Pandya teaches further comprising triggering, by the one or more processors, an automated intervention in response to detecting the subject event in the new camera images (Pandya col 2 lines 52-59, col 10 lines 55-65).
Claims 17-19 recite limitations similar in scope to the limitations of claims 7-9 and are therefore rejected under the same rationale.
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Barbosa as modified by Varol, Tripathi and Pandya, and further in view of Mathew et al (US 20240183132 A1).
RE claim 10, Barbosa as modified by Varol, Tripathi and Pandya is silent RE wherein the subject event comprises faulty operation of building equipment and the automated intervention comprises adjusting an operation of the building equipment in response to detecting the faulty operation. However, Mathew teaches this feature in abstract, Fig 4, [0024], [0027], [0031]-[0032], etc., for handling a damaged machine.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Barbosa as modified by Varol, Tripathi and Pandya a system and method wherein the subject event comprises faulty operation of building equipment and the automated intervention comprises adjusting an operation of the building equipment in response to detecting the faulty operation, as suggested by Mathew, in order to assure appropriate action is taken in view of the faulty condition, thereby increasing system effectiveness and user experience.
Claim 20 recites limitations similar in scope to the limitations of claim 10 and is therefore rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See attached 892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SULTANA MARCIA ZALALEE whose telephone number is (571)270-1411. The examiner can normally be reached Monday-Friday, 8:00am-4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Sultana M Zalalee/ Primary Examiner, Art Unit 2614