Prosecution Insights
Last updated: April 19, 2026
Application No. 18/202,769

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE NON-TRANSITORY STORAGE MEDIUM

Non-Final OA: §101, §102, §103, §112
Filed: May 26, 2023
Examiner: SHARIFF, MICHAEL ADAM
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 1 (Non-Final)

Forecast: 82% grant probability (Favorable) • 1-2 OA rounds • 2y 10m to grant • 99% with interview

Examiner Intelligence

Career Allow Rate: 82% (94 granted / 115 resolved) • +19.7% vs TC avg (above average)
Interview Lift: +22.3% in resolved cases with interview (strong)
Typical Timeline: 2y 10m avg prosecution • 16 currently pending
Career History: 131 total applications across all art units
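The headline figures above follow directly from the career counts; a quick arithmetic check (illustrative only, not part of the analytics output):

```python
granted, resolved = 94, 115           # career counts from the panel above
allow_rate = 100 * granted / resolved
tc_avg = allow_rate - 19.7            # the panel reports +19.7% vs TC avg
print(f"allow rate {allow_rate:.1f}%, implied TC average {tc_avg:.1f}%")
# allow rate 81.7%, implied TC average 62.0%
```

The dashboard rounds 81.7% up to the displayed 82%.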

Statute-Specific Performance

§101: 17.9% • -22.1% vs TC avg
§102: 18.6% • -21.4% vs TC avg
§103: 43.1% • +3.1% vs TC avg
§112: 16.4% • -23.6% vs TC avg

Tech Center averages are estimates • Based on career data from 115 resolved cases

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention, “INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE NON-TRANSITORY STORAGE MEDIUM”, is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Objections

Claim 2 is objected to because of the following informalities: the claim limitation “the unsafety condition being associated with an action indicated by the recognition result” should recite “the unsafety condition being associated with the action of the first person indicated by the recognition result” for proper antecedent basis. Appropriate correction is required.

Claims 3-6 are objected to because of the following informalities: the claim term “unsafety information” should recite “the unsafety information” for proper antecedent basis, as “unsafety information” was introduced in claim 2, from which claims 3-6 depend. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 3-5 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Specifically, it is unclear whether the antecedent basis of “the action” refers to “an action of the person” introduced in claim 1 or to “an action of the first person” introduced in claim 2. For the sake of examination, Examiner interprets “the action” to refer to “the action of the first person” from claim 2, from which claims 3-5 depend.
Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 9, and 10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without integration into a practical application or recitation of significantly more. In the analysis below, the apparatus of independent claim 1 is considered representative of independent claims 9-10, since all of the independent claims recite identical steps despite being directed to different statutory categories. Furthermore, independent claims 1 and 9-10 are directed to one of the four statutory categories of eligible subject matter (a process for independent claim 9 and a computer-readable medium for independent claim 10); thus, the claims pass Step 1 of the Subject Matter Eligibility Test (see flowchart in MPEP 2106).

Step 2A, prong 1 analysis: The independent claims are directed to a detection process of detecting a person and an object based on sensor information; a recognition process of recognizing an action of the person based on a relevance between the person and the object; and a generation process of generating unsafety information pertaining to unsafety of the action with reference to a detection result in the detection process and a recognition result in the recognition process. Each of the above steps can be performed mentally. In particular, a construction manager at a construction site, overseeing a project such as constructing a building, observes workers doing their jobs, such as setting up scaffolding or doing excavation. The manager observes, using their own human vision, a specific worker as well as the equipment the worker is using (an excavator, for example, to dig into the ground) and recognizes the action being performed (excavation) based on the worker sitting inside the excavator and operating the machine; the manager then sets off an alarm (unsafety information) to let the worker know that other workers are within an unsafe distance of the excavation work. Therefore, this process can all be done mentally. As such, the description in independent claims 1 and 9-10 is an abstract idea, namely, a mental process. Accordingly, the analysis under prong one of Step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

Additional elements: The additional element recited in independent claims 1 and 9-10 is at least one processor.

Step 2A, prong 2 analysis: The above-identified additional elements do not integrate the judicial exception into a practical application. Each of the additional elements (at least one processor) amounts to merely using a device as a tool to perform the claimed mental process. Implementing an abstract idea on a computer or using known generic devices does not integrate a judicial exception into a practical application (see MPEP 2106.05(f)). Moreover, the additional elements of the claims do not recite an improvement in the functioning of a computer or other technology or technical field, the claimed steps are not performed using a particular machine, the claimed steps do not effect a transformation, and the claims do not apply the judicial exception in any meaningful way beyond generically linking the use of the judicial exception to a particular technological environment (see MPEP 2106.04(d)). Therefore, the analysis under prong two of Step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

Step 2B: Finally, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Each of the additional elements (at least one processor) is a generic computer feature performing generic computer functions that are well-understood, routine, and conventional, and does not amount to more than implementing the abstract idea with a computerized system. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation, and mere implementation on a generic computer does not add significantly more to the claims. Accordingly, the analysis under Step 2B of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

For all of the foregoing reasons, independent claims 1 and 9-10 do not recite eligible subject matter under 35 U.S.C. 101.
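The three recited processes (detection, recognition, generation) form a simple pipeline. The sketch below is a hypothetical illustration of that flow; all names and rules are invented for clarity and are not the applicant's implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    persons: list  # detected person identifiers
    objects: list  # detected object identifiers

def detect(sensor_info):
    # Detection process: find persons and objects in sensor data (stubbed).
    return Detection(persons=["worker_1", "worker_2"], objects=["excavator"])

def recognize(det):
    # Recognition process: infer the action from person-object relevance.
    if "worker_1" in det.persons and "excavator" in det.objects:
        return "excavating"
    return None

def generate_unsafety(det, action):
    # Generation process: emit unsafety information from both results.
    if action == "excavating" and len(det.persons) > 1:
        return "unsafe: another person near the excavation"
    return None

det = detect(sensor_info=None)
action = recognize(det)
print(generate_unsafety(det, action))
# unsafe: another person near the excavation
```

Under the rejection's reasoning, each of these three steps corresponds to an observation a human manager could make mentally.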
Claim 2 recites wherein: in the detection process, the at least one processor detects one or more persons and one or more objects; in the recognition process, the at least one processor recognizes, based on a relevance between a first person among the one or more persons and a first object among the one or more objects, an action of the first person; and in the generation process, in a case where the at least one processor has determined, based on the detection result, that an unsafety condition is satisfied, the at least one processor generates the unsafety information, the unsafety condition being associated with an action indicated by the recognition result, and being related to a second person different from the first person or a second object different from the first object. Continuing the example discussed in claim 1 above, the worker uses the excavator to dig the ground and move rock and dirt, observed by the construction manager; the manager recognizes a safety violation when a second worker is too close to the excavator in operation and sets off an alarm to stop the excavation process and let the workers know of the safety issue. Therefore, this process can all be done mentally.

Claim 3 recites wherein: the second object is an object to be subjected to the action; in the generation process, the at least one processor extracts a feature of the second object based on the detection result; and in the generation process, in a case where an unsafety condition related to the feature of the second object is satisfied, the at least one processor generates unsafety information. Continuing the example discussed in claim 1 above, the worker uses the excavator to dig the ground and move rock and dirt, observed by the construction manager; the manager recognizes a safety violation when the rocks dug up and moved by the excavator form a pile that becomes too large and unstable and risks collapsing, endangering other workers with flying rocks, and sets off an alarm to stop the worker from adding more rocks to the oversized pile. Therefore, this process can all be done mentally.

Claim 4 recites wherein: the second object is an object that is necessary for safety of the action; and in the generation process, in a case where an unsafety condition that the second object has not been detected is satisfied, the at least one processor generates unsafety information based on the detection result. Continuing the example discussed in claim 1 above, the worker uses the excavator to dig the ground and move rock and dirt, observed by the construction manager; the manager recognizes a safety violation when the worker operating the heavy machinery does not have the proper personal protective equipment (PPE), such as a hard hat, safety glasses, high-visibility vest, hearing protection, durable work boots, and gloves, for operating an excavator, and sets off an alarm to stop the excavation process and let the worker know of the missing equipment. Therefore, this process can all be done mentally.

Claim 5 recites wherein: the second object is an object that is not associated with the action; and in the generation process, in a case where an unsafety condition that the second object has been detected is satisfied, the at least one processor generates unsafety information based on the detection result. Continuing the example discussed in claim 1 above, the worker uses the excavator to dig the ground and move rock and dirt, observed by the construction manager; the manager recognizes a safety violation when falling debris from another part of the job site lands on the excavator, or when other driven heavy equipment, such as a backhoe, moves into the excavation zone where there is a risk of collision with the excavator; if the manager sees this, they set off an alarm to stop all workers from operating the machinery. Therefore, this process can all be done mentally.

Claim 6 recites wherein: in the generation process, in a case where an unsafety condition that the second person has been detected in a predetermined range from the first object is satisfied, the at least one processor generates unsafety information based on the detection result. Continuing the example discussed in claim 1 above, the worker uses the excavator to dig the ground and move rock and dirt, observed by the construction manager; the manager recognizes a safety violation when a second worker is too close to the excavator in operation and sets off an alarm to stop the excavation process and let the workers know of the safety issue. Therefore, this process can all be done mentally.

Claim 7 recites an output process of outputting the unsafety information. Continuing the example discussed in claim 1 above, the construction manager sets off an alarm in response to visually seeing a safety violation by one of the workers operating the excavator; therefore, this process can all be done mentally.

Claim 8 recites wherein: in the detection process, the at least one processor detects the person and the object with reference to a detection model which has been generated by machine learning; in the recognition process, the at least one processor recognizes the action with reference to a recognition model which has been generated by machine learning; and the at least one processor further carries out an acquisition process of acquiring feedback information, the feedback information being used to retrain one or both of the detection model and the recognition model with respect to output of the unsafety information, and the feedback information indicating whether or not the action is an unsafe action. Machine learning models are generic machines, not particular machines, and do not amount to more than the judicial exception in terms of detecting objects and persons. Continuing the example discussed in claim 1 above, the construction manager sets off an alarm, stops the worker from operating the excavator, and then trains the worker on proper safety protocol at the construction site, while also noting down the incident to remember during future observation of the job site for different types of safety violations, such as lack of personal protective equipment (PPE) or a lack of awareness when using machinery while other workers are too close to the operation zone. By noting and remembering the specific incidents, the construction manager gives more formal training and daily reminders about specific safety protocol so that the safety violations are less likely to happen in the future. Training generic computer models with feedback is akin to human memory of an observed event; therefore, this process can all be done mentally.

Therefore, dependent claims 2-8 recite the same abstract idea of a mental process which can be performed in the mind or with the aid of pen and paper, and are therefore also rejected under 35 U.S.C. 101.
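The dependent-claim conditions above reduce to simple predicate checks over the detection result. A minimal sketch, with hypothetical thresholds and object names (nothing here comes from the application or from Man):

```python
import math

def within_range(a, b, r):
    """True if two (x, y) positions lie within distance r of each other."""
    return math.dist(a, b) <= r

def check_unsafety(detected_objects, second_person_pos, first_object_pos):
    """Hypothetical predicate checks in the spirit of claims 4 and 6."""
    alerts = []
    # Claim 4 style: an object necessary for safety (say, a hard hat)
    # was NOT detected.
    if "hard_hat" not in detected_objects:
        alerts.append("missing PPE")
    # Claim 6 style: a second person detected within a predetermined range
    # of the first object (say, 5 m around the excavator).
    if second_person_pos is not None and within_range(
            second_person_pos, first_object_pos, 5.0):
        alerts.append("bystander too close")
    return alerts

print(check_unsafety({"excavator"}, (3.0, 0.0), (0.0, 0.0)))
# ['missing PPE', 'bystander too close']
```

Each predicate is exactly the kind of comparison the rejection argues a human observer could perform by inspection.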
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-7 and 9-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2021/0109497 (Man et al.) (hereinafter Man).

Regarding claim 1, Man teaches an information processing apparatus, comprising at least one processor, the at least one processor carrying out: (Man, para. [0342]-[0345]: “FIG. 11 is a block diagram that illustrates an example computer system with which an embodiment may be implemented. In the example of FIG. 11, a computer system 1100 and instructions for implementing the disclosed technologies in hardware, software … At least one hardware processor 1104 is coupled to I/O subsystem 1102 for processing information and instructions … Computer system 1100 includes one or more units of memory 1106, such as a main memory, which is coupled to I/O subsystem 1102 for electronically digitally storing data and instructions to be executed by processor 1104.”) a detection process of detecting a person and an object based on sensor information (Man, para. [0176]-[0181]; FIG. 4E: “FIG.
4E depicts an example workflow for identifying and monitoring health and safety risks in industrial sites … For example, in some embodiments, some steps may be performed by activity/object detector 110G, other steps may be performed by characteristics analyzer 110H, and yet other steps may be performed by health and safety risks identifier 110L … In step 410, the processor receives a plurality of data inputs from a plurality of data input devices installed in an industrial site. The data inputs may include video recordings, digital photographs, sensor data, and the like, collected by the data input devices, such as video cameras, digital cameras, sensors, and other devices configured to collect data specific to industrial sites … In step 420, the processor selects a data model from one or more data models that have been trained on training data for industrial sites and programmed to detect activities or objects associated with the workers or equipment present at the industrial sites. Training of the data model may involve providing, as input, the training data that allow the model to identify (based on video input data, photographs, and the like) the workers, objects, pieces of equipment, materials, constructions elements, etc., that may be present in industrial sites … In step 430, the processor applies the data inputs to the data model to cause the data model to evaluate the data inputs and generate output data specifying whether the inputs describe activities or objects indicative of the presence of workers, equipment, and the like at the industrial site”); a recognition process of recognizing an action of the person based on a relevance between the person and the object (Man, para. [0182]-[0184]; para. [0085]; para. [0186]; FIG. 4E; FIG. 4A-4C; para. [0204]; para.
[0196]: “Applying the data inputs to the data model and evaluating the data model with the data inputs may include identifying, in the inputs, a plurality of digital images that depict the workers working at the industrial site and the pieces of equipment present at the industrial site. It may also include identifying, based on the images, the pieces of equipment depicted in the images, and generating output indicating whether one or more activities or objects associated with workers or the equipment are present at the industrial site. The specific details about the way in which the output is generated depend on the implementation and the type of the machine learning model being used … In step 450, the processor determines, based on the output data, one or more characteristics of the one or more activities or objects. The characteristics of the activities and objects may be the characteristics that may cause any health or safety risks at the industrial site, especially, that pose an excessive risk at the industrial site or pose a risk that is greater than a normal risk at the industrial site.”; “The characteristics may indicate, for example, the placement and location of the objects (or persons) in relation to other objects (or persons).”; “In step 460, based on the or more characteristics, the processor determines whether the one or more activities or objects cause any health or safety risks at the industrial site. This may include determining whether a data repository, which stores one or more mappings of characteristics onto risks posed in the industrial sites, includes the entries for the one or more characteristics. If the processor determines that such entries are present in the repository, then the processor may retrieve those entries, extract the risk identifiers from the entries, and use the extracted risk identifiers as indications of the health or safety risks posed in the industrial site.”; see steps 450 and 460 in FIG. 
4E; “Hence, according to the example depicted in FIG. 4A, if, in step 460 of FIG. 4E, the processor determines, based on the content of the input frames, (1) activity characteristic 4304 indicating that a worker is working at an industrial site, and (2) steel erection characteristic 4310 indicating that the worker is erecting a steel pole as he sets up the scaffolds, and (3) potential fall from heights characteristic 4320 indicating that, as the worker sets the scaffolds in an upper level of the scaffolding, he might fall, then the processor may determine that those characteristics indicate safety risk 4302 that belongs to risks 4100.”; “For example, suppose that a video camera provided a sequence of frames that depicts a worker claiming a ladder. Furthermore, suppose that a machine learning model was applied to the sequence of video frames and the model was evaluated based on the frames to generate and provide output data. Suppose that the output data indicate that the worker was detected in the frames and it was detected that the worker was climbing the ladder. Suppose that based on the output data, computer 110 (of FIG. 1) determines one or more characteristics indicating that the worker was climbing the ladder. Those characteristics may be used to lookup a repository of the characteristics of the risks associated with the industrial sites, and if a match is found, then computer 110 may determine that the input data provided from the industrial site indicate a risk associated with the worker climbing the ladder.”); and a generation process of generating unsafety information pertaining to unsafety of the action with reference to a detection result in the detection process and a recognition result in the recognition process (Man, para. [0188]-[0190]; FIG. 4E; FIG.
4D: “If, in step 470, the processor determines that the activities or objects cause the health/safety risks, then the processor proceeds to performing step 480 … In step 480, the processor determines actions based on the identified risks. The processor may, for example, generate one or more notifications that indicate the health or safety risks at the industrial site and transmit the notifications to notification recipients, such as managers, security officers, the police, and the like. Also, in step 480, the processor may, for example, generate a graphical representation of the health or safety risks posed at the industrial site and transmit the representation to a computer display device to cause the computer display device to generate, based on the graphical representation, a GUI depicting the one or more health or safety risks posed at the industrial site. The computer display device may be a device that is operated by a manager, a security officer, or any other person in charge of the safety of the site.”; see steps 470 and 480 in FIG. 4E).

Regarding claim 2, Man teaches the information processing apparatus according to claim 1, wherein: in the detection process, the at least one processor detects one or more persons and one or more objects (Man, para. [0176]-[0181]; FIG. 4E; see rejection of claim 1 above); in the recognition process, the at least one processor recognizes, based on a relevance between a first person among the one or more persons and a first object among the one or more objects, an action of the first person (Man, para. [0182]-[0184]; para. [0085]; para. [0186]; FIG. 4E; FIG. 4A-4C; para. [0204]; para.
[0196]; see rejection of claim 1 above); and in the generation process, in a case where the at least one processor has determined, based on the detection result, that an unsafety condition is satisfied, the at least one processor generates the unsafety information, the unsafety condition being associated with an action indicated by the recognition result (Man, para. [0186]; FIG. 4E; FIG. 4A-4C; para. [0204]; para. [0196]; para. [0188]-[0190]; FIG. 4E; FIG. 4D; see rejection of claim 1 above discussing both examples of a worker (person) on a ladder (object) and a worker (person) erecting a steel pole (object) as they set up a scaffold; the characteristics of the detected action indicate that the worker might fall and therefore indicate a potential safety risk; a repository storing characteristics is checked, and if a characteristic matches the recognized action (unsafety condition), a risk identifier associated with the characteristic, such as falling from a height, is found for both the ladder and scaffolding examples) and being related to a second person different from the first person or a second object different from the first object (Man, para. [0205]-[0206]: “According to another example, if, in step 460 of FIG. 4E, the processor determines, based on the content of the input frames, (1) activity characteristic 4304 indicating that a worker is working at an industrial site, and (2) steel erection characteristic 4310 indicating that the worker is erecting a steel pole as he sets up the scaffolds, and (3) struck by a flying object characteristic 4322 indicating that, as the worker sets the scaffolds in an upper level of the scaffolding, he might have been struck by some lose scaffold, then the processor may determine that those characteristics indicate safety risk 4302 that belongs to risks 4100. According to other examples, in step 460 of FIG.
4E, the processor determines, based on the content of the input frames, (1) activity characteristic 4304 indicating that a worker is working at an industrial site, and (2) steel erection characteristic 4310 indicating that the worker is erecting a steel pole as he sets up the scaffolds, and (3) hazard due to a tool usage characteristic 4322 indicating that, as the worker sets the scaffolds in an upper level of the scaffolding, he might have dropped one of his tools, then the processor may determine that those characteristics indicate safety risk 4302 that belongs to risks 4100.”; these additional examples include a tool (second object) that has been dropped while the worker (person) is erecting the steel pole (first object) as they set up scaffolds (action), as well as a loose scaffold/flying object (second object) that hits the worker (person) while the worker is erecting the steel pole (first object) during scaffolding (action)).

Regarding claim 3, Man teaches the information processing apparatus according to claim 2, wherein: the second object is an object to be subjected to the action; in the generation process, the at least one processor extracts a feature of the second object based on the detection result; and in the generation process, in a case where an unsafety condition related to the feature of the second object is satisfied, the at least one processor generates unsafety information (Man, para.
[0206]; see rejection of claim 2 above; the example of a worker (person) erecting a steel pole (first object) as they set up scaffolding (action) using a tool (second object) that is then dropped, which causes a hazard due to tool usage characteristic (feature) to be detected by the machine learning image processing model; if the condition of the tool falling from the person using it while erecting the steel pole in the air during scaffolding setup is met, then the characteristic leads to a risk assessment and a safety score is output; see hazards due to tool usage 4324 in FIG. 4B in the rejection of claim 1; one of ordinary skill in the art of construction knows that one needs tools to erect a steel pole in a scaffold, including wrenches to tighten connections, a level to ensure it is plumb and level, and a podger spanner for alignment; other essential tools include a hammer and a tape measure; therefore, this meets the broadest reasonable interpretation of the claim limitation “the second object to be subjected to the action” because the tools are used to carry out the steel pole erecting process in scaffolding setup).

Regarding claim 4, Man teaches the information processing apparatus according to claim 2, wherein: the second object is an object that is necessary for safety of the action; and in the generation process, in a case where an unsafety condition that the second object has not been detected is satisfied, the at least one processor generates unsafety information based on the detection result (Man, para. [0224]; FIG. 4C-4D: “In some embodiments, safety risks may be detected (4110) based on determining (4112) lack of protective personal equipment (PPE), which may include workman gloves, workman suites, workmen shoes, workman masks, workman helmets, workman headsets, and the like.
If it is expected that, while performing certain activities, the workers are expected to wear, for example, the helmets, then lack of depictions of the helmets in the pictures showing the workers at the industrial site may indicate safety risks at the site.”; in the examples discussed in the rejection of claim 2, a worker (person) erects a steel pole (first object) while setting up a scaffold (action); a second related object is the personal protective equipment necessary to work at the heights needed to set up the scaffold; protective equipment is necessary for safe scaffolding setup, and the lack of PPE is detected during the action; if PPE is not detected, then the safety risk (unsafety information) output by the ML model includes a lack-of-PPE warning; see 4112 in FIG. 4D and 4012C in FIG. 4C in the rejection of claim 1 above).

Regarding claim 5, Man teaches the information processing apparatus according to claim 2, wherein: the second object is an object that is not associated with the action; and in the generation process, in a case where an unsafety condition that the second object has been detected is satisfied, the at least one processor generates unsafety information based on the detection result (Man, para. [0205]; see rejection of claim 2 above; the example of a loose scaffold/flying object (second object) that hits the worker (person) while the worker is erecting the steel pole (first object) during scaffolding setup (action); see 4322 struck by a flying object in FIG. 4B in the rejection of claim 1 as a characteristic found in image analysis/detection that leads to a risk assessment indicating a safety risk and an output of a safety score).
Regarding claim 6, Man teaches the information processing apparatus according to claim 2, wherein: in the generation process, in a case where an unsafety condition that the second person has been detected in a predetermined range from the first object is satisfied, the at least one processor generates unsafety information based on the detection result (this claim's additional limitation is ignored because claim 6 depends from claim 2, which recites “the unsafety condition … being related to a second person different from the first person or a second object different from the first object”; in the rejection of claim 2, from which claim 6 depends, Examiner mapped the citation of Man to a second object and not a second person; because these two embodiments were introduced in the alternative, any further claims depending from claim 2 and referring to the second person are likewise ignored; therefore, claim 6 is ignored by Examiner).

Regarding claim 7, Man teaches the information processing apparatus according to claim 1, wherein: the at least one processor further carries out an output process of outputting the unsafety information (Man, para. [0188]-[0190]; FIG. 4E; FIG. 4D; see rejection of claim 1 above discussing the GUI notification output of the safety risk at the industrial site).

With regard to claim 9, it recites the functions of the apparatus of claim 1 as a process. Thus, the analysis in rejecting claim 1 is equally applicable to claim 9.

Regarding claim 10, Man teaches a computer-readable non-transitory storage medium storing a program for causing a computer to function as an information processing apparatus, the program causing the computer to carry out: (Man, para.
[0345]: “Computer system 1100 includes one or more units of memory 1106, such as a main memory, which is coupled to I/O subsystem 1102 for electronically digitally storing data and instructions to be executed by processor 1104 … Memory 1106 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1104. Such instructions, when stored in non-transitory computer-readable storage media accessible to processor 1104, can render computer system 1100 into a special-purpose machine that is customized to perform the operations specified in the instructions.”).

With regards to the remaining limitations of claim 10, they recite the functions of the apparatus of claim 1 as a computer-readable non-transitory storage medium storing a program. Thus, the analysis in rejecting claim 1 is equally applicable to the remaining limitations of claim 10.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Man, in view of Korean Patent Publication No. KR 102322644 B1 (Lee et al.) (hereinafter Lee).

Regarding claim 8, Man teaches the information processing apparatus according to claim 7, wherein: in the detection process, the at least one processor detects the person and the object with reference to a detection model which has been generated by machine learning; and in the recognition process, the at least one processor recognizes the action with reference to a recognition model which has been generated by machine learning (Man, para. [0182]-[0184]; para. [0085]; para. [0186]; FIG. 4E; FIG. 4A-4C; para. [0204]; para. [0196]; see rejection of claim 1 above; Man, para. [0102]: “In some embodiments, a decision support, and data processing method for monitoring activities on industrial sites is configured to employ a machine learning approach to process the data received from a distributed network of sensors. The sensors may provide the data expressed in various data formats. The machine learning system may be programmed to process the collected data, and generate outputs that include activity records, activity metrics, and activity-based alerts. The outputs may be used to generate warnings and alarms that may be used to deter safety violations, corruption, and inefficiencies in using mechanical equipment, industrial materials, and other resources. The warnings and alarms may be particularly useful in managing large-scale industrial sites.”).

Man fails to teach the at least one processor further carries out an acquisition process of acquiring feedback information, the feedback information being used to retrain one or both of the detection model and the recognition model with respect to output of the unsafety information, and the feedback information indicating whether or not the action is an unsafe action.
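The pipeline Man's para. [0102] describes (sensor data processed by a machine-learning system into activity records, metrics, and activity-based alerts) can be sketched schematically. This is a toy illustration under stated assumptions; the function name, risk scores, and the 0.7 alert threshold are hypothetical, not from Man.

```python
# Hypothetical sketch of Man's described pipeline: sensor readings are
# processed into activity records, and an alert is emitted when an
# (assumed) risk score exceeds an (assumed) threshold.

def process_sensor_data(readings):
    """Toy ML stand-in: produce activity records and activity-based alerts."""
    RISK_THRESHOLD = 0.7  # assumed alert cutoff, not from Man
    records, alerts = [], []
    for r in readings:
        records.append({"activity": r["activity"], "risk": r["risk"]})
        if r["risk"] > RISK_THRESHOLD:
            alerts.append(f"warning: {r['activity']} (risk {r['risk']:.2f})")
    return records, alerts

records, alerts = process_sensor_data(
    [{"activity": "scaffolding_setup", "risk": 0.85},
     {"activity": "material_transport", "risk": 0.30}]
)
```

In this sketch, both readings become activity records, but only the high-risk scaffolding activity generates a warning.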
Lee teaches the at least one processor further carries out an acquisition process of acquiring feedback information, the feedback information being used to retrain one or both of the detection model and the recognition model with respect to output of the unsafety information, and the feedback information indicating whether or not the action is an unsafe action (Lee, page 15, para. 4-5; FIG. 7; page 10, para. 1; page 26, para. 4; page 27, para. 3-5: “To this end, the computing device 100 may provide a labeling guide for the operator on the labeling screen. For example, when it is determined that the object recognized on the screen is a ladder, the computing device 100 may provide a guide for inputting a point in the leg portion in more detail. In addition, it provides a description of a dangerous situation that may occur in relation to the ladder, and can provide a guide to input the part corresponding to the dangerous situation in more detail. In an embodiment, when labeling is completed, the computing device 100 may attempt object recognition and dangerous situation recognition for the labeled object. For example, the computing device 100 may determine what the labeled object is, whether the object is in a dangerous situation, and the type of the dangerous situation by using the model learned according to the disclosed embodiment. 
In this case, when the accuracy of object recognition or dangerous situation recognition is less than or equal to a preset reference value, it is possible to request the operator to label again.”; “In one embodiment, the operator terminal 200 may be connected to the learning apparatus 100 of the artificial intelligence model through the network 400, and a user interface (UI) (User Interface, UI) from the learning apparatus 100 of the artificial intelligence model ( For example: Figures 4, 6, 8, 9, 15, 16, 17, 18) may be provided, and actions related to the learning process of the artificial intelligence model (eg, generating and managing training data, inputting feedback information, etc.) may be provided through the UI.) can be done.”; “In step S240 , the computing device 100 may collect feedback information from the operator in response to the guide information provided through step S230. For example, the computing device 100 may provide a feedback information input UI 60 (eg, FIGS. 17 and 18) for receiving feedback information, and collect feedback information through the feedback information input UI 60. In this case, the computing device 100 may store the work site image including the result of the action according to the action method included in the first feedback information input from the operator and the second feedback information as a best practice image. In step S250, the computing device 100 may train the artificial intelligence model by using the feedback information obtained in step S240 as training data. Here, the method of learning the artificial intelligence model may be implemented in the same or similar form as the learning method of the artificial intelligence model described in steps S110 to S130 of FIG. 3 (eg, supervised learning by labeling feedback information on a work site image)”).
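The feedback workflow Lee describes (collect operator feedback, fold it back into the training set, and request re-labeling when recognition confidence falls at or below a preset reference value) can be sketched as follows. This is a toy illustration under stated assumptions; `Model`, `process`, and the 0.8 threshold are hypothetical stand-ins, not Lee's implementation.

```python
# Hypothetical sketch of Lee's feedback-retraining loop: recognition with a
# confidence score, a re-label request below a preset reference value, and
# feedback stored as additional training data. All names are illustrative.

ACCURACY_THRESHOLD = 0.8  # assumed "preset reference value"

class Model:
    """Toy stand-in for Lee's artificial-intelligence model."""
    def __init__(self):
        self.training_data = []

    def recognize(self, image):
        # Placeholder: a real model would return (label, confidence) from inference.
        return ("ladder", 0.65)

    def retrain(self, sample):
        # Lee step S250: use collected feedback as training data.
        self.training_data.append(sample)

def process(model, image, operator_feedback):
    label, confidence = model.recognize(image)
    needs_relabel = confidence <= ACCURACY_THRESHOLD  # Lee: request re-labeling
    model.retrain({"image": image, "label": label, "feedback": operator_feedback})
    return needs_relabel

model = Model()
needs_relabel = process(model, "site_frame.png", "unsafe ladder placement")
```

With the placeholder confidence of 0.65, the sketch requests re-labeling and records one feedback sample for retraining.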
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the at least one processor, as taught by Man, to carry out an acquisition process of acquiring feedback information, the feedback information being used to retrain one or both of the detection model and the recognition model with respect to output of the unsafety information, and the feedback information indicating whether or not the action is an unsafe action, as taught by Lee. The suggestion/motivation for doing so would have been that re-training supervised machine-learning models in construction safety improves accuracy by incorporating new data to predict hazards more effectively and proactively mitigate risks; this process allows the model to learn from updated accident data, near misses, and real-time site conditions, leading to more accurate predictions for future safety issues such as equipment malfunctions, dangerous behaviors, and improper personal protective equipment (PPE) use. Therefore, it would have been obvious to combine Man with Lee to obtain the invention as specified in claim 8.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. Patent Application Publication No. 2023/0206693 and U.S. Patent Application Publication No. 2022/0067547.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL ADAM SHARIFF, whose telephone number is 571-272-9741. The examiner can normally be reached M-F, 8:30 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL ADAM SHARIFF/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672

Prosecution Timeline

May 26, 2023
Application Filed
Nov 15, 2025
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602903
Method for Analyzing Image Information Using Assigned Scalar Values
2y 5m to grant · Granted Apr 14, 2026
Patent 12579776
DISPLAY DEVICE, DISPLAY METHOD, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant · Granted Mar 17, 2026
Patent 12561959
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR TARGET IMAGE PROCESSING
2y 5m to grant · Granted Feb 24, 2026
Patent 12548293
IMAGE DETECTION METHOD AND APPARATUS
2y 5m to grant · Granted Feb 10, 2026
Patent 12541976
RELATIONSHIP MODELING AND ANOMALY DETECTION BASED ON VIDEO DATA
2y 5m to grant · Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+22.3%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 115 resolved cases by this examiner. Grant probability derived from career allow rate.
