DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on May 14, 2025 is in compliance with 37 CFR 1.97 and 1.98 and therefore has been considered by the examiner and placed in the file.
Response to Arguments
Applicant’s arguments with respect to claims 1 and 7 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation (BRI) using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. BRIs for particular claim terms are provided below. Should Applicant believe that these interpretations are inaccurate, Applicant should point to the portions of the specification that provide a basis for different interpretations.
The BRI of a claim element is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. In the nonfinal Office Action, particular claim terms were interpreted as invoking 35 U.S.C. 112(f) because the terms used were generic placeholders coupled with functional language without reciting sufficient structure to perform the recited function. The claims have been amended to replace the generic placeholder terms with terms that connote structure to perform the recited functions. Therefore, 35 U.S.C. 112(f) is no longer invoked.
Below are BRIs for the particular terms:
first learned model: para. [0106]-[0110], software and/or firmware executed by a processor such as a CPU, a DSP, an ASIC, or an FPGA;
plurality of objects: para. [0026], objects being handled during the manufacturing work, such as a speaker, a smartphone and a smartphone carrier, as well as the worker's hand(s) and a tool being used by the worker;
work state: Figs. 26 and 27 and para. [0040]-[0041], the stage of the manufacturing process and the operation being performed at the stage;
process of a manufacturing work of a product: para. [0013], a manufacturing process for manufacturing a product, such as a smartphone;
determination target image: para. [0061], an image of the hand(s) of the worker during the manufacturing process;
second learned model: para. [0106], software and/or firmware executed by a processor such as a CPU, a DSP, an ASIC, or an FPGA;
landmark: something in the work area that serves as a reference position relative to which the positions of the objects are determined;
state transition model: Fig. 26, para. [0040], a state machine, which can be represented by a state diagram, modeling the transitions between the work states;
anteroposterior relationship: para. [0117], the relationship of the work states to one another; and
cumulative result: paras. [0077]-[0079] and Figs. 34-36, the sum of, for each work state, the number of times that the determination unit determines the manufacturing process to be in the work state over a number of image frames.
Claim Rejections - 35 USC § 101
Applicant’s arguments regarding the rejection of the claims under 35 U.S.C. 101 are persuasive. Additionally, the claims as amended recite limitations that cannot practically be performed in the mind of a human being. Accordingly, the rejection of claims 1-7 under 35 U.S.C. 101 is withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publ. Appl. No. 2020/0209836 A1 to Bauer et al. (hereinafter referred to as “Bauer”) in view of U.S. Pat. No. 10,729,502 B1 to Wolf et al. (hereinafter referred to as “Wolf”).
Regarding claim 1, Bauer discloses a work management device (Fig. 1, manufacturing control system 1, para. [0080]) comprising:
a memory (Fig. 1, paras. [0080]-[0082], manufacturing execution system (MES) 3 can be implemented on a cloud that includes “an external server with computing and storage capacity”; the memory of Bauer can be a first portion of the “storage capacity”) configured to store a first learned model (paras. [0055], [0058] and [0132] discuss using machine learning algorithms to detect and track objects and the worker’s hand(s); it is well known in the art that all machine learning algorithms use a machine learning model that is typically implemented as software executed by a processor; such a machine learning model constitutes the first learned model of claim 1), wherein the first learned model outputs a plurality of objects (Fig. 6, para. [0133]: “[t]he determined position of the object is now integrated into the control system of the industrial manufacturing plant for the production of the end product (step 63). Depending on the object, position information of a tool, a person, a device of transport, a machine tool, and/or a workpiece collection point unit is thus available for the control of the industrial manufacturing plant”) that define each of a plurality of work states forming one process of a manufacturing work of a product (para. [0046]: “[t]he concept of image-based tracking disclosed herein allows continuous object tracking during a multi-stage manufacturing process within a manufacturing hall and can thus be easily used in intralogistics applications”; see also para. 
[0009] discussing controlling the sequence of manufacturing process steps while tracking the workpiece being worked on: “wherein the processing steps include individual or several of the following operations: cutting, in particular laser cutting, punching, bending, drilling, threading, grinding, joining, welding, riveting, screwing, pressing, treating the edges and surfaces”), with respect to a determination target image, and the determination target image includes a hand image that is an image of a hand of a worker associated with the manufacturing work of the product (paras. [0160]-[0162] discussing image-based tracking of the worker’s hand relative to the workpiece: “[i]f the ‘hand’ of an operator removes a component from the remaining grid, the component location is booked from the remaining grid to the hand in the MES. If the hand moves near a workpiece collection point unit, the MES records that this part has been deposited at the corresponding workpiece collection point unit. On the one side, the tracking system can detect that the hand came close to the workpiece. On the other side, a higher-level system (e.g., the MES) can link the workpiece collection point unit and the position of the hand.”); and
a processor (para. [0026], detection logic of the MES 3 and/or of the image-acquisition device-based tracking system 5 of the manufacturing control system 1 shown in Fig. 1 includes “position recognition system with an object recognition unit” that constitutes a processor) configured to:
detect the plurality of objects with respect to the determination target image by using the first learned model (see the above application of Bauer to the limitations of the memory and the first learned model; see also paras. [0037], [0055], [0058], [0160] and [0162], the “position recognition system with an object recognition unit” can be implemented by the first learned model for tracking a plurality of objects, such as tools and the workpiece with respect to the worker’s hands); and
determine, based on the plurality of objects, a work state of the plurality of work states indicated by the determination target image (para. [0026], the determination of the work state is made by logic comprising the “image sequence analysis unit” and the “image sequence change unit” executing “image sequence analysis software” of the MES 3 and/or of the image-acquisition device-based tracking system 5 of the manufacturing control system 1 shown in Fig. 1; paras. [0154]-[0157], the position of the worker’s hand, the position of the workpiece and/or the position of the tool are used to determine the current stage, or state, of the manufacturing process of a plurality of work states: “[a]nother usage scenario concerns the recording of process states that are characterized by the positions of workpieces, people, machines, and other operating resources and that can be captured by a cognitive evaluation of these measured positions”)
control display of the determination target image (the BRI for “determination target image” provided by the examiner in the nonfinal Office Action is that it means an image of the hand(s) of the worker during the manufacturing process. Para. [0162] of Bauer discloses tracking of the hand of a person performing a work state process and tracking the component held in the hand. Para. [0104] of Bauer discloses displaying the moving elements being tracked, such as the “workpieces, transport carriages, operators”. Para. [0154] of Bauer discusses that the object being tracked can be a hand tool and tracking the hand tool and how it is used. Para. [0023] of Bauer discusses “displaying information on the position and/or the manufacturing status of the object to be tracked”. Therefore, Bauer does disclose the limitation of claims 1 and 7 of controlling display of the determination target image); and
generate visual indicators overlaid on the displayed determination target image, wherein the visual indicators are overlaid to identify the detected plurality of objects and the work state corresponding to each of the plurality of objects, and the visual indicators corresponding to the work state are updated in real time as the work state changes during the manufacturing work (the displayed information discussed in paras. [0023], [0104] and [0154] of Bauer cited above relating to detected and tracked hand(s) and object(s) constitute visual indicators, but Bauer does not explicitly disclose that those visual indicators are overlaid on the displayed target image).
To overlay displayed target information with visual indicators that provide additional information about the target being displayed is well known in the art. Wolf, in the same field of endeavor, discloses detecting a surgical tool and hand gestures of the surgeon (Col. 105, lines 36-49) in captured video images at different stages of the surgery and overlaying the video images with visual indicators (“markers”) indicating the work state (Col. 210, lines 42-53: “accessing at least one video of a surgical procedure causing the at least one video to be output for display overlaying on the at least one video outputted for display a surgical timeline, wherein the surgical timeline includes markers identifying at least one of a surgical phase”; see also Fig. 4 and Col. 19, line 46-Col. 22, line 29 discussing overlaying the video with different types of visual indicators; since the markers in Wolf are part of the surgical timeline and identify the surgical phase, they are updated in real time as the surgical phase changes during the surgery).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the manufacturing control system 1 of Bauer to cause the visual indicators identifying the detected objects and the corresponding work states that are determined in Bauer to be overlaid on the displayed determination target image of Bauer and to be updated in real time as the work states change, as taught by Wolf. One of ordinary skill in the art would have been motivated to make the modification to improve monitoring of the workstations and controlling the manufacturing process disclosed in Bauer. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods (e.g., modifying the software executed by logic of the MES 3 of Bauer) to yield predictable results.
Regarding claim 7, this claim recites the same steps that are recited in claim 1, and therefore the rejection of claim 1 applies mutatis mutandis to claim 7.
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Bauer in view of Wolf and further in view of U.S. Publ. Appl. No. 2025/0124282 A1 to Wen et al. (hereinafter referred to as “Wen”).
Regarding claim 2, as indicated above, Bauer discloses that the processor determines, based on the detected plurality of objects, a work state of the plurality of work states. However, Bauer does not explicitly disclose that the memory is configured to store a second learned model that is used by the processor to output information indicating one of the plurality of work states. As also indicated above with reference to the first learned model limitation of claim 1, Bauer discloses that machine learning algorithms can be stored in a storage capacity and used to perform the operations recited in claim 1 as performed by the processor.
It is well known in the art that separate machine learning models or a single model can be used to perform separate algorithms such as object detection and object tracking algorithms. Wen, in the same field of endeavor, discloses an image classification system that stores at least first and second machine learning models locally and uses them to perform different algorithms (Figs. 1 and 2, paras. [0018]-[0030]). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to further modify the manufacturing control system 1 of Bauer as modified by Wolf based on the teachings of Wen to use first and second learned models and to use the second learned model to output information indicating one of the plurality of work states based on the object detected. One of ordinary skill in the art would have been motivated to make the modification to allow the operations of detecting the object contained in the target image and determining the work state based on the detected object to be performed in parallel to decrease processing overhead and time. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods (e.g., modifying the software executed by logic of the MES 3 of Bauer) to yield predictable results.
Regarding claim 3, to the extent that the same limitations are recited in claims 2 and 3, the rejection of claim 2 above applies to claim 3 mutatis mutandis. The only limitation recited in claim 3 that is not also recited in claim 2 is that the processor uses the second learned model to also determine position coordinates of each of the plurality of objects. Logic of the MES 3 of Bauer also determines position coordinates of the plurality of detected objects (paras. [0086], [0089] and [0104]: “[i]n the digital site plan 25, therefore, not only stationary elements (machine tools) but also moving elements (workpieces, transport carriages, operators) are displayed. The integration of moving elements into the site plan is made possible by the image acquisition device-based position determination, for example by recording the movement of the transport carriages 21, workpieces 23, and operators 31 as objects to be tracked with the tracking system”; since positions are being determined and recorded, this means that position coordinates are being determined and recorded).
Regarding claim 4, Bauer discloses that the position coordinates determined by the processor indicate a relative position with respect to a landmark (para. [0162] discusses determining the position of a worker’s hand relative to the object (e.g., the workpiece) and the relative position of the worker’s hand to a landmark (e.g., the workpiece collection unit); since positions are being determined, this means that position coordinates are being determined).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Bauer in view of Wolf and further in view of U.S. Publ. Appl. No. 2007/0282480 A1 to Pannese et al. (hereinafter referred to as “Pannese”).
Regarding claim 5, Bauer does not explicitly teach that the processor uses a state transition model representing an anteroposterior relationship among the plurality of work states to determine the work state indicated by the determination target image.
Pannese, in the same field of endeavor, discloses using a state transition model representing an anteroposterior relationship among a plurality of states of a semiconductor manufacturing process (Fig. 4, state machine 400, para. [0076]: “[a] state machine 400 may include a number of states including a first state 402, a second state 404, and a third state 406. The states may represent, for example, the states of an isolation valve (e.g., open or closed), positions of a robotic arm (e.g., location x, y, z, etc.), status of a buffer station, or any other state or combination of states in a semiconductor manufacturing process. Each change from one state to another state occurs through a transition, such as a first transition 410. It will be noted that each state may have one or more transitions into and out of that state. This may be, for example, a control signal or a sensor output that triggers a response by an item of hardware to transition to a different state.”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to further modify the system 1 of Bauer as modified by Wolf further based on the teachings of Pannese to incorporate the state transition model 400 of Pannese for use in determining the work state since state transition models are configured to keep track of transitions into and out of states as well as the current state. State transition models are commonly used for this purpose. One of ordinary skill in the art would have been motivated to make the modification to determine the current work state and when a transition has been made into or out of a particular state to ensure that actions that are intended to occur in each state occur with precise timing (see paras. [0100] and [0102] of Bauer discussing the triggering of actions in various states of the manufacturing processes). The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods (e.g., modifying the software executed by logic of the MES 3 of Bauer) to yield predictable results.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Bauer in view of Wolf and further in view of U.S. Publ. Appl. No. 2024/0310851 A1 to Ebrahimi Afrouzi et al. (hereinafter referred to as “Afrouzi”).
Regarding claim 6, Bauer does not explicitly teach that the processor determines the work state indicated by the determination target image based on a cumulative result of past determination results. Afrouzi, in the same field of endeavor, discloses a processor of a robot that, when the robot is executing a movement path, assigns rewards as the robot takes actions to transition between states and uses the net cumulative reward to evaluate a movement path. Paths taken by the robot iteratively evolve based on the cumulative reward to avoid paths that, in the past, resulted in a low net cumulative reward (Afrouzi, para. [1160]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the system 1 of Bauer as modified based on the teachings of Wolf further based on the teachings of Afrouzi such that the processing logic of the MES 3 of the system 1 uses the cumulative result of past determination results to determine the work state indicated by the determination target image. One of ordinary skill in the art would have been motivated to make the modification to improve the accuracy of a determination result by taking into account the cumulative result of past determination results. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods (e.g., modifying the software executed by logic of the MES 3 of Bauer) to yield predictable results.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Publ. Appl. No. 2020/0413011 A1 to Zass et al. is directed to systems and methods for controlling image acquisition robots in construction sites that identify object discrepancies and display a depiction of the object having the identified discrepancies along with visual indications of the discrepancies that are overlaid on the displayed image of the object.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL J SANTOS whose telephone number is (571)272-2867. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Bella, can be reached at (571)272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL JOSEPH SANTOS/Examiner, Art Unit 2667
/MATTHEW C BELLA/Supervisory Patent Examiner, Art Unit 2667