DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicant
The following is a Final Office action. In response to Examiner’s Non-Final Rejection of 9/29/25, Applicant amended the claims on 12/24/25. Claims 1-2, 4, and 6-9 are pending in this application and have been rejected below.
Response to Amendments
The Claim Interpretation section is removed in light of the amendments; 35 U.S.C. 112(f) is no longer invoked.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 1/20/26 has been considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-2, 4, and 6-9 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Independent claim 1 now recites “detect positions of joints of the worker in the consecutive images, determine movement amounts and movement directions of positions of a joint representing a torso of the worker between images, and determine a time at which the worker is in the movement when the movement amounts are greater than or equal to a predefined threshold and the movement directions are consistent, and determine a time at which the worker is stationary when the movement amounts are below the predefined threshold, or the movement directions are not consistent.” Examiner is unable to locate support for the analysis of “consistent” movement directions. Examiner suggests identifying the support and/or amending or cancelling the claim limitations at issue.
Claims 6 and 7 recite similar limitations and are rejected for the same reasons.
Claims 2, 4, and 8-9 depend from the independent claims and are rejected for the same reasons.
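For illustration only, one possible reading of the amended limitation can be expressed as the following sketch; the threshold value, the direction-consistency tolerance, and all names are assumptions of this illustration and are not drawn from the specification:

```python
import math

# Hypothetical illustration only: the threshold value and the direction
# tolerance are assumed, not taken from the specification.
THRESHOLD = 0.05                   # the claimed "predefined threshold" (assumed value)
DIRECTION_TOLERANCE = math.pi / 4  # assumed bound on "consistent" directions

def classify_frames(torso_positions, times):
    """Label each inter-frame interval 'moving' or 'stationary' under one
    possible reading of the claim: moving when the movement amount meets
    the threshold AND the directions are consistent; otherwise stationary."""
    labels = []
    prev_dir = None
    for i in range(1, len(torso_positions)):
        (x0, y0), (x1, y1) = torso_positions[i - 1], torso_positions[i]
        dx, dy = x1 - x0, y1 - y0
        amount = math.hypot(dx, dy)      # "movement amount" between images
        direction = math.atan2(dy, dx)   # "movement direction" between images
        if prev_dir is None:
            consistent = True
        else:
            diff = abs(direction - prev_dir)
            # wrap-around-aware comparison of successive directions
            consistent = min(diff, 2 * math.pi - diff) < DIRECTION_TOLERANCE
        if amount >= THRESHOLD and consistent:
            labels.append((times[i], "moving"))
        else:  # amount below threshold, or directions not consistent
            labels.append((times[i], "stationary"))
        prev_dir = direction
    return labels
```
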
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-2, 4, and 6-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1-2, 4, and 6-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential steps, such omission amounting to a gap between the steps. See MPEP § 2172.01. Specifically, it is unclear which steps in the independent claims are recited in the alternative and which are not. The limitation recites “detect positions of joints of the worker in the consecutive images, determine movement amounts and movement directions of positions of a joint representing a torso of the worker between images, and determine a time at which the worker is in the movement when the movement amounts are greater than or equal to a predefined threshold and the movement directions are consistent, and determine a time at which the worker is stationary when the movement amounts are below the predefined threshold, or the movement directions are not consistent.” Is the claim stating that the “time” at which the worker is stationary can be based on the alternative of “the movement directions are not consistent” alone? Or is “or the movement directions are not consistent” intended to cover a scenario in which the movement amounts are greater than or equal to the threshold but the directions are not consistent? Examiner is unable to determine what is intended, particularly because, as noted above, Examiner is also unable to locate support for “consistent.” Perhaps the “or” is intended to be “and,” so that the final clause relates only to the time at which the worker is “stationary.”
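For illustration only, the two competing readings of the “or” clause can be contrasted as follows; all names and values are hypothetical:

```python
# Hypothetical illustration only; the names and the threshold value are
# not drawn from the claims or the specification.
THRESHOLD = 1.0  # the claimed "predefined threshold" (assumed value)

def is_stationary_reading_1(amount, consistent):
    # Reading 1: stationary when the amount is below the threshold, OR
    # whenever the directions are not consistent (pure alternative).
    return amount < THRESHOLD or not consistent

def is_stationary_reading_2(amount, consistent):
    # Reading 2 (the "and" reading): stationary only when the amount is
    # below the threshold AND the directions are not consistent.
    return amount < THRESHOLD and not consistent

# The readings diverge for a large but direction-inconsistent movement:
print(is_stationary_reading_1(2.0, False))  # True
print(is_stationary_reading_2(2.0, False))  # False
```
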
Claims 6 and 7 recite similar limitations and are rejected for the same reasons.
Claims 2, 4, and 8-9 depend from the independent claims and are rejected for the same reasons.
Reasons for Subject Matter Eligibility under 35 USC 101
The claims overcome the 35 U.S.C. 101 rejections because independent claims 1, 6, and 7 [as best understood in light of the 112 rejections] now recite: a plurality of cameras configured to capture consecutive images of a worker; a controller configured to acquire captured consecutive images of the worker from the plurality of cameras; detect positions of joints of the worker in the consecutive images; and determine movement amounts and movement directions of positions of a joint representing a torso of the worker between images. When viewing the claim as a whole, the claim is not directed to an abstract idea; the limitations, in combination, are viewed as a practical application under Step 2A, Prong Two, as the claim uses the judicial exception in a meaningful way under MPEP 2106.05(e). The same reasons apply to claims 6 and 7, which recite similar limitations.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4, and 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Bortolini, “Motion Analysis System (MAS) for production and ergonomics assessment in the manufacturing processes,” 2020, Computers & Industrial Engineering Vol. 139, 105485, pages 1-13, in view of Kiran (US 12,109,015) and Radwin.
Concerning claim 1, Bortolini discloses:
A work recognition device (Bartolini – see page 3, Section 3 and 3.1 – MAS (Motion Analysis System) is an original hardware/software architecture conceived for the analysis of human manufacturing and assembly systems; The hardware structure of MAS is constituted by a Wi-Fi network with up to four depth cameras connected each one via USB port to dedicated PCs. The adopted PCs have to be equipped with high-performance graphic cards which allow to process the huge data flow of images acquired from each camera during MOCAP operations. Between the PCs, one acts as master while the others are the slaves. This system configuration, depicted in Fig. 2, allows to synchronize the four image flows thanks to the Wi-Fi communication between the master and slave PCs.), comprising:
a plurality of cameras configured to capture consecutive images of a worker (Bartolini – see page 3, Section 3.1 – (Motion Analysis System) is an original hardware/software architecture conceived for the analysis of human manufacturing and assembly systems; The hardware structure of MAS is constituted by a Wi-Fi network with up to four depth cameras connected each one via USB port to dedicated PCs);
a controller configured to: (Bartolini – see page 3, Section 3 and 3.1 – MAS (Motion Analysis System) is an original hardware/software architecture conceived for the analysis of human manufacturing and assembly systems; The hardware structure of MAS is constituted by a Wi-Fi network with up to four depth cameras connected each one via USB port to dedicated PCs; see also page 5, section 3.2 software architecture
see also Kiran col. 28, lines 1-35; col. 28, lines 52-65 - a computer program product including computer executable instructions for executing the process steps in the claims below and stored on a computer readable medium is included within the present invention.)
acquire the captured consecutive images of the worker from the plurality of cameras (Bartolini – see page 3, Section 3 – MAS (Motion Analysis System) is hardware/software architecture for analysis of human manufacturing and assembly; aim is to analyze the human work providing the production management with a report from productive (time and space) and ergonomic view (See FIG. 1); see page 3, Section 3.1, 2nd paragraph – depth cameras are Microsoft Kinect … used for human tracking in industrial applications; see page 5, section 3.2, 1st paragraph – software providing dynamic analysis of working operator conceived to elaborate the human body digitalization coming from the depth camera (see Kinect in section 3.1); digitalization of bodies consists in the process of recording the movement of their skeletons; resulting output file stores positions of all body joints over time; See page 5, col. 1, 3rd paragraph - position vectors (X, Y, Z) of each joint is listed and stored frame by frame providing a dynamic representation of all the movements executed by the operator);
acquire work procedure information relating to a predefined work procedure of a series of tasks to be performed by the worker at a plurality of workbenches (Bortolini – see page 5, section 3.2, 2nd paragraph - The set of joints together with their position vector are the necessary information to analyse the manufacturing/assembly process together with additional information regarding the area in which the operator works and the product he manufactures or assembles; page 5, col. 1, last paragraph – col. 2 - Information of the product to be assembled or manufactured: the product components, dimension and weight (in synthesis the product BOM, e.g. bill of materials); • Information of the tools necessary for the manual operations: position of tools, their dimension and weight; • Relation between components and tools used for the final product manufacturing; see page 9, FIG. 9 – assembly station with worker also includes rear racks, Europallet, and trolley);
detect positions of joints of the worker in the consecutive images (Bortolini – see page 7, col. 1, 3rd paragraph - position vectors (X, Y, Z) of each joint is listed and stored frame by frame providing a dynamic representation of all the movements executed by the operator; see page 7, section 3.2.1 – control volume analysis; see Footnote 1 - In the MAS environment, the analyst can define 3D control volumes within the workplace (defining their dimensions and 3D position) to achieve an in-depth statistic about the locations most visited by the worker hands over time. Creating and placing control volumes on the workbenches, in the picking positions, within specific shelfs or racks or in whatsoever location, allows to distinguish between the operator picking or travelling time (no added-value activity) and the manufacturing time), determine movement amounts and movement directions of positions of a joint representing a torso (Bortolini – see page 6, FIG. 6 – skeleton joints of acquired human body include “chest, shoulders”; see page 7, col. 1, 3rd paragraph - The position vectors (X, Y, Z) of each joint is listed and stored frame by frame providing a dynamic representation of all the movements executed by the operator.
See also Kiran – see Col. 19, lines 32-48 - FIG. 27 shows the rapid upper limb assessment (RULA) method used to calculate an ergonomic score, in accordance with prior art. RULA is an assessment technique used to understand risk of the upper extremities, the neck, the body trunk, and legs when performing a work motion or task) of the worker between images (Applicant’s Admitted Prior Art – [0044] as published states “A publicly known technique known as "OpenPose", which is described in Reference Document 1 mentioned below, may be used as a method for identifying the joints of the worker W. With OpenPose, skeletal information of a worker W may be detected from a captured image.”
See also Bortolini – see page 5, Section 3.2 – depth camera (Kinect) ; digitalization of their bodies consists in the process of recording the movement of their skeletons. The resulting output file stores the positions of all body joints over time; see page 6, FIG. 6 – skeleton joints of acquired human body include “chest, shoulders”; see page 7, section 3.2.1 – control volume analysis; see Footnote 1 - In the MAS environment, the analyst can define 3D control volumes within the workplace (defining their dimensions and 3D position) to achieve an in-depth statistic about the locations most visited by the worker hands over time. Creating and placing control volumes on the workbenches, in the picking positions, within specific shelfs or racks or in whatsoever location, allows to distinguish between the operator picking or travelling time (no added-value activity) and the manufacturing time; see page 9, col. 2, 3rd paragraph -Concerning the different activities performed by the operator during the cycle time, MAS evaluates for each control volume the number of visits and their duration for both the operator hands. This analysis is adopted to assess and to distinguish between the time spent by the operator to execute added-value activities (assembly tasks) and picking/travelling activities), and determine a time at which the worker is in the movement when the movement amounts …, and determine a time at which the worker is stationary when the movement amounts… (Bortolini – see page 7, col. 1, 2nd paragraph - evaluate the performance of manual operations during an assessment trial quantifying the productivity and the ergonomics of a workstation. 
The productive viewpoint is assessed through a dynamic analysis of the operator movements in relation to the workplace layout in which the tasks are executed (manufacturing activities, task execution time, component locations, workspace usage, racks or workbenches utilization, hands position, etc; see page 9, Section 5 – “cycle time partition between the different working activities”).
Bortolini discloses analyzing operator movements to distinguish cycle times in working activities (See page 7; page 9), considering the percent of time spent assembling, walking, or picking (See page 9, col. 2, 3rd paragraph), and highlighting possible productive and ergonomic improvements (e.g., workstation layout, location of tools or components) (See page 12, col. 1).
Kiran discloses analyzing worker duties “at a work station” and cycles of activities – filling a box, taping it, moving the box to a pallet for shipping, as a “full cycle” disclosing “movement directions are consistent” (as best understood in light of 112b rejections) (Kiran – See col. 14, lines 15-28 - FIG. 17 is chart illustrating logical flow associated with ergonomics scoring carried out by the analytics platform of FIG. 13, in accordance with an embodiment of the present invention. These pertinent concepts can be applied to perform ergonomic evaluation of the radial/ulnar angle, flexion/extension angle, and pronation/supination angle parameters of the subject's wrist motions, such as shown in FIG. 24; These pertinent concepts include high intensity 1702, small duty cycle 1704, and high frequency 1706 and are applied to such parameters of the subject's joint motions to calculate a safety score; See Col. 17, lines 1-35, FIG. 21 – The task with respect to the method of FIG. 21 means a full cycle of performing the duties at a work station—for example, this can involve taking an empty box, filling the box with items hand-picked off of a conveyor belt, taping up the box, moving the box to a pallet for shipping, that is all together a full cycle—a task. The worker then repeats this task—what was named a sample above—over and over again during the full 8-hour workday. A sample then consists of a set of time series data from the wearable sensors which is limited to a single task; “feature 2 :number of signal zero-axis crossings” in sample of motion; see also 2120)
[media_image1.png: 744 × 792, greyscale]
Radwin discloses the amended claim limitations regarding the thresholds, as best understood in light of the 112(b) rejections:
detect positions of joints of the worker in the consecutive images, determine movement amounts and movement directions of positions of a joint representing a torso (Radwin – See par 84, FIG. 4 - FIG. 4 depicts a subject 2 having a trunk angle, A.sub.T. The trunk angle, A.sub.T may be defined by an imaginary line or plane, P.sub.T, extending through a spine of the subject 2 if the spine of the subject 2 were straight and a line or plane, P.sub.vertical, perpendicular to a line or plane, P.sub.horizontal, of or parallel to a surface supporting the subject 2. When determining a trunk angle and/or trunk kinematics, a distinction may be made between a trunk flexion angle, a spine flexion angle, and/or other suitable trunk angles. see par 93 - values of trunk position parameters of a subject may be determined 116 from the received data in addition to values of a trunk angle. For example, values of trunk kinematics including, but not limited to, values of a trunk velocity of the subject (e.g., a value of a velocity of movement of a trunk of the subject performing a task), values of a trunk acceleration of the subject (e.g., a value of an acceleration of movement of a trunk of the subject performing a task) and determine a time at which the worker is in the movement when the movement amounts “are greater than or equal to a predefined threshold and the movement directions are consistent,” (Radwin – see par 93 - values of trunk position parameters of a subject may be determined 116 from the received data in addition to values of a trunk angle. 
For example, values of trunk kinematics including, but not limited to, values of a trunk velocity of the subject (e.g., a value of a velocity of movement of a trunk of the subject performing a task), values of a trunk acceleration of the subject (e.g., a value of an acceleration of movement of a trunk of the subject performing a task; see par 94-95 - an exponential equation, to trunk angles (e.g., trunk angles during a lift) may facilitate estimating trunk angles over a series of consecutive video frames (e.g., dynamically), along with calculating trunk kinematics (e.g., trunk speed/velocity, acceleration, etc.) over a series of video frames. variables for determining or predicting the coefficients, α, β, of the exponential equation may include, but are not limited to, the average, maximum, and standard deviations for features in the received and/or calculated data, along with the respective speed and acceleration over the set of frames depicting the subject performing the lift. In some cases, the coefficients, a, β, may differ based on a posture or positioning of the subject (e.g., whether the subject is stooping, squatting, or standing; see par 149, FIG. 17 - As discussed above, the monitoring or tracking system 10 may compare successive frames of the video by comparing corresponding pixels of the successive frames and/or by comparing the frames in one or more other manners. 
Once the subject has been identified, a beginning of an event of interest and an ending of the event (disclosing consistent) of interest may be determined 404; When the event of interest involves a lifting task, the subject may be tracked from a location at which an object is picked up (e.g., loaded) until a location at which the object is set down (e.g., unloaded) (disclosing “worker in movement with movement amounts greater than or equal to predefined threshold (zero) and movement directions are consistent) and determine a time at which the worker is stationary “when the movement amounts are below the predefined threshold, or the movement directions are not consistent” (Radwin – see par 141-142 – if subject 2 and/or object 4 of interest stops moving for a set number of frames, subject may be absorbed into background; see par 143-144 - Because the feet may not move fast from frame-to-frame for a conventional video frame rate (e.g., a frame rate in a range from 15 frames per second (fps) to 30 fps), the difference between the feet location of a silhouette 40 in the current frame and that of the previous frame may be expected to be small (e.g., as measured in change of pixel locations from frame-to-frame), with an average of about zero (0) pixels. 
As such, a plausible location for a feet portion of the silhouette 40 in the current frame may be defined by one or more pixels extending from the feet location of the silhouette 40 in a previous frame; see par 150 - the monitoring or tracking system 10 may identify or extract parameter values from the video including, but not limited to, frequency (e.g., from the horizontal location tracking), speed (e.g., an amount of time between a beginning of an event and an end of the event), acceleration, and/or other parameter of the subject during the event of interest; Based on these parameters, … the monitoring or tracking system 10 may determine a trunk angle of the subject… and/or perform one or more other assessments (e.g., injury risk assessments and/or other suitable assessments) of movements of the subject during the event of interest).
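For context, the frame-to-frame trunk kinematics Radwin describes (par 93-95) amount to finite differences over consecutive video frames; the following is a minimal sketch, with an assumed frame rate and hypothetical names:

```python
def trunk_kinematics(trunk_angles, fps=30.0):
    """Finite-difference trunk angular velocity and acceleration from
    per-frame trunk angles; fps is an assumed frame rate (Radwin notes
    conventional rates of 15-30 fps). Illustrative only."""
    dt = 1.0 / fps
    # velocity between consecutive frames
    velocities = [(a1 - a0) / dt
                  for a0, a1 in zip(trunk_angles, trunk_angles[1:])]
    # acceleration between consecutive velocity samples
    accelerations = [(v1 - v0) / dt
                     for v0, v1 in zip(velocities, velocities[1:])]
    return velocities, accelerations
```
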
Bortolini, Kiran, and Radwin disclose:
set breaks between tasks of the series of tasks based on the work procedure information and the time at which the worker is in movement and the time at which the worker is stationary (Kiran – see col. 6, lines 1-27 – 1) understanding worker motion associated with both untrained and trained workers to accelerate and optimize a training process; 2) assessing and improving the ergonomic safety of a job or individual worker; see col. 17, lines 35-67, FIG. 21 - The engineer now needs to decide on a distance metric between this new point and the existing point cloud 2122. Some ideas are cosine similarity (where a mean of the point cloud is used) or a Kullback-Leibler (KL) divergence but a person skilled in the art can create the best fitting norm. Whatever metric, it is computed and plotted 2120. As the window slides over the data (the user decides the window size and how much it slides by), the norm becomes small in some regions and larger in other regions 2120. Since the norm represents a distance, we find a cycle when it takes the smallest values. We can leverage additional information about when the cycles occur in that we know they are periodic in nature which can help smooth out the results more and give more accurate conclusions. Having collected the cycles, various scoring statistics can be gathered 2124 such as how long time the specific cycle took, what the angular range explored was—the more extreme the less productive, etc;
See also Radwin – see par 150 - the monitoring or tracking system 10 may identify or extract parameter values from the video including, but not limited to, frequency (e.g., from the horizontal location tracking), speed (e.g., an amount of time between a beginning of an event and an end of the event); monitoring or tracking system 10 may be configured to capture and/or receive video in real time during an event of interest and perform real time processing and/or assessments, in accordance with the approach 500 and as discussed herein, with the goal of preventing injuries and/or mitigating injury risks during the event of interest.); and
output break information relating to the breaks between the tasks (Kiran – see col. 6, lines 1-27 - 6) optimizing job rotations and breaks based upon worker motion performance and speed; See col. 22, lines 58-67, col. 23, lines 1-12 - the data may be used with respect to the “Administrative” step 3508. The “Administrative” step pertain to changing the methods and habits of a worker, with the end goal to limit the exposure of the worker to the hazard. Here, the motion data collected from the wearable device may be used to inform modifications and enforcement of breaks, rotations, and stretches that follow best practices to avoid musculoskeletal injuries. Breaks, rotations, and stretches may also be optimized based on exposure levels and fatigue levels indicated by the collected data. The collected data and the overall analytics platform infrastructure can be used to help train workers on how to safely and productively move and operate in a facility from the first day they are hired.).
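For illustration only, treating a switch from the movement state to the stationary state as a task boundary (as in the break-setting limitation) can be sketched as follows, with hypothetical names and input format:

```python
def task_start_times(labels):
    """Given (time, state) pairs with state 'moving' or 'stationary',
    return the times at which the worker switches from the movement
    state to the stationary state (taken here as task start times).
    Illustrative only; the input format is an assumption."""
    return [t1 for (t0, s0), (t1, s1) in zip(labels, labels[1:])
            if s0 == "moving" and s1 == "stationary"]
```
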
Bortolini, Kiran, and Radwin are analogous art as they are directed to using motion data to analyze workers performing tasks (See Bortolini Abstract, page 3, section 3; Kiran Abstract; Radwin Abstract, par 60-65 (data recorder 23 e.g. image capturing device). 1) Bartolini discloses analyzing operator movements to distinguish cycle times in working activities (See page 7; page 9) and considering percent of time assembling, walking, or picking (See page 9, col. 2, 3rd paragraph) and highlighting possible productive and ergonomic aspects of possible improvements (e.g. workstation layout, location of tools or components) (See page 12, col. 1). Kiran improves upon Bortolini by looking at zero-axis crossing in motion data for detecting duty cycle of tasks at a work station including minimal motion movement (see FIG. 21, 2120) for detecting beginning and end of a task duty cycle and using the data to modify breaks and rotations for workers (See col. 6, lines 1-27; col. 17; See col. 22, lines 58-67, col. 23, lines 1-12). One of ordinary skill in the art would be motivated to further include assessing motion data to distinguish beginning and end of task cycles and recommending when users take breaks to efficiently improve upon the image analysis of worker joints for ergonomic possible improvements in Bortolini. 2) Bortolini discloses one of the joints is a “chest” (See page 6, FIG. 6) and operator “travelling time” to different racks (See page 7) and time spent doing assembly tasks in a cycle (See page 9, col. 2), and Kiran discloses the presence of a “body trunk” with regards to ergonomic risk (See col. 19) and assessing a cycle of activities – duties “at a work station” and “moving the box” to shipping (see col. 17). 
Radwin improves upon Bortolini and Kiran by using trunk movement as calculation basis for movements with regards to a task (see par 93), looking for trunk velocity over a series of video frames (See par 94-95) and looking at time between a beginning and an end of an event along with movements of the subject in comparison to subject stopping movement for a number of frames (See par 141-144, 149) which is used for injury risk assessment (See par 150). One of ordinary skill in the art would be motivated to further include trunk movement for tasks along with time for events, stopping of feet movement in frames of videos, for injury risk assessment, to efficiently improve upon the image analysis of worker joints, that includes a “chest”, for ergonomic possible improvements in Bortolini and the analysis of motion data for determining breaks for workers that uses a ”body trunk” in Kiran.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the camera analysis of workers performing manufacturing in Bortolini to further use motion data to analyze when breaks for workers can occur as disclosed in Kiran, and to further analyze trunk movements for tasks, looking at video frames when feet are not moving and tracking time durations from videos during tasks as disclosed in Radwin since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success.
Concerning independent claim 6, Bortolini, Kiran, and Radwin disclose:
A work recognition method including a computer executing processing comprising (Bartolini – see page 3, Section 3 and 3.1 – MAS (Motion Analysis System) is an original hardware/software architecture conceived for the analysis of human manufacturing and assembly systems; The hardware structure of MAS is constituted by a Wi-Fi network with up to four depth cameras connected each one via USB port to dedicated PCs. The adopted PCs have to be equipped with high-performance graphic cards which allow to process the huge data flow of images acquired from each camera during MOCAP operations. Between the PCs, one acts as master while the others are the slaves. This system configuration, depicted in Fig. 2, allows to synchronize the four image flows thanks to the Wi-Fi communication between the master and slave PCs; see also page 5, section 3.2 software architecture; see also Kiran col. 28, lines 1-35; col. 28, lines 52-65 - a computer program product including computer executable instructions for executing the process steps in the claims below and stored on a computer readable medium is included within the present invention).
The remaining limitations are similar to those of claim 1 above. It would have been obvious to combine Bortolini, Kiran, and Radwin for the same reasons as set forth for claim 1.
Concerning independent claim 7, Bortolini, Kiran, and Radwin disclose:
An information processing apparatus (Bartolini – see page 3, Section 3 and 3.1 – MAS (Motion Analysis System) is an original hardware/software architecture conceived for the analysis of human manufacturing and assembly systems; The hardware structure of MAS is constituted by a Wi-Fi network with up to four depth cameras connected each one via USB port to dedicated PCs. The adopted PCs have to be equipped with high-performance graphic cards which allow to process the huge data flow of images acquired from each camera during MOCAP operations. Between the PCs, one acts as master while the others are the slaves. This system configuration, depicted in Fig. 2, allows to synchronize the four image flows thanks to the Wi-Fi communication between the master and slave PCs; see also page 5, section 3.2 software architecture; see also Kiran col. 28, lines 1-35; col. 28, lines 52-65 - a computer program product including computer executable instructions for executing the process steps in the claims below and stored on a computer readable medium is included within the present invention) comprising:
The remaining limitations are similar to claim 1 above. It would be obvious to combine Bortolini and Kiran for the same reasons as claim 1.
Concerning claims 2, 8, and 9, Bortolini and Kiran and Radwin disclose:
The work recognition device according to claim 1, wherein:
the processor is further configured to:
compare a current state determined from a current set of images with a previous state determined from a previous set of images (Bortolini – see page 5, col. 1, 3rd paragraph - position vectors (X, Y, Z) of each joint is listed and stored frame by frame providing a dynamic representation of all the movements executed by the operator) and detect a state switch of the worker between a movement state and a stationary state based on the time at which the worker is in movement and the time at which the worker is stationary (Bortolini – see page 7, col. 1, 2nd paragraph – analysis of operator movement in relation to workplace layout in which tasks are executed includes “task execution time”; see page 7, col. 2, 3rd-4th paragraphs – cycle time horizon for performing assembly tasks;
see also Kiran – see col. 10, lines 51-61 – productivity-indicating score includes “time it takes to finish a given task”; see col. 14, lines 1-14 - As part of the analysis, the method leverages machine learning techniques to identify cycles in the measurements of a subject or group of subjects. In particular, the machine learning techniques analyze the data across multiple measurements and features to identify cycles in repetitive tasks (red box delineation in the top left plot to the right side) and to identify sub-tasks within these cycles (blue delineation in the top left plot to the right side);
in response to the detected state switch indicating that the worker has switched from the movement state to the stationary state, then set a time of the detected state switch as a start time of a task (Kiran – see col. 17, lines 1-30 - The task with respect to the method of FIG. 21 means a full cycle of performing the duties at a work station—for example, this can involve taking an empty box, filling the box with items hand-picked off of a conveyor belt, taping up the box, moving the box to a pallet for shipping, that is all together a full cycle—a task. The worker then repeats this task—what was named a sample above—over and over again during the full 8-hour workday. A sample then consists of a set of time series data from the wearable sensors which is limited to a single task. There can be slight variations in the way each task is performed which explains the collection of multiple repeated tasks. Then, a feature matrix is constructed for each sample 2106. The features are engineered to maximize detecting the cycles in the data); and
in response to the detected state switch indicating that the worker has switched from the stationary state to the movement state, then set the time of the detected state switch as an end time of a task (Kiran – see col. 17, lines 1-30 - The task with respect to the method of FIG. 21 means a full cycle of performing the duties at a work station—for example, this can involve taking an empty box, filling the box with items hand-picked off of a conveyor belt, taping up the box, moving the box to a pallet for shipping, that is all together a full cycle—a task. The worker then repeats this task—what was named a sample above—over and over again during the full 8-hour workday. A sample then consists of a set of time series data from the wearable sensors which is limited to a single task; col. 17, lines 45-67 - Since the norm represents a distance, we find a cycle when it takes the smallest values. We can leverage additional information about when the cycles occur in that we know they are periodic in nature which can help smooth out the results more and give more accurate conclusions. Specifically, when a cycle has been identified in a given region we don't look further there. Having collected the cycles, various scoring statistics can be gathered 2124 such as how long time the specific cycle took),
wherein the start time and the end time are recorded as the break information (Kiran - See col. 22, lines 58-67, col. 23, lines 1-12 - The break, rotation, or stretch may then be monitored through further motion data, collected via the wearable device, to ensure that it was taken by the worker, to ensure the appropriate stretches occurred, and to ensure that the rotation to the appropriate workstation occurred. Breaks, rotations, and stretches may also be optimized based on exposure levels and fatigue levels indicated by the collected data. The collected data and the overall analytics platform infrastructure can be used to help train workers on how to safely and productively move and operate in a facility from the first day they are hired; see col. 24, lines 9-31 – measure of operational performance may be total amount of time that workers are moving compared to total amount of time where workers are not moving).
It would be obvious to combine Bortolini and Kiran and Radwin for the same reasons as claim 1.
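For illustration only (this sketch is not part of the record and does not limit the claim scope), the state-switch logic recited in claims 2, 8, and 9 may be sketched as follows; the function names, data layout, and displacement threshold are hypothetical:

```python
# Illustrative sketch of the claimed state-switch logic (claims 2, 8, 9).
# The worker is treated as "moving" when the displacement between
# consecutive image sets exceeds a hypothetical threshold; a switch from
# the movement state to the stationary state records a task start time,
# and a switch from the stationary state to the movement state records
# a task end time, the pair being recorded as the break information.

MOVE_THRESHOLD = 0.05  # hypothetical displacement threshold (meters)

def detect_breaks(positions, times):
    """positions: per-frame (x, y) worker positions; times: timestamps.
    Returns (start_time, end_time) pairs recorded as break information."""
    breaks = []
    start = None
    prev_state = None  # True = movement state, False = stationary state
    for i in range(1, len(positions)):
        dx = positions[i][0] - positions[i - 1][0]
        dy = positions[i][1] - positions[i - 1][1]
        moving = (dx * dx + dy * dy) ** 0.5 > MOVE_THRESHOLD
        if prev_state is True and not moving:
            start = times[i]           # movement -> stationary: task start
        elif prev_state is False and moving:
            if start is not None:      # stationary -> movement: task end
                breaks.append((start, times[i]))
                start = None
        prev_state = moving
    return breaks
```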
Concerning claim 4, Bortolini and Kiran disclose:
The work recognition device according to claim 1, wherein the processor determines the time at which the worker is in movement and the time at which the worker is stationary based on movement amounts of a central position of a circumscribed area circumscribing all identified joints of the worker (Bortolini – see page 3, section 3.1 – system configuration in FIG. 2 synchronizes four image flows; cameras have two parallel sensors for a best depth evaluation; features and performance summarized in FIG. 3 – FOV (field of view); skeleton detectable joints; The depicted spatial field of view of the cameras is the result of an experimental campaign aimed at the investigation of the operating limits of the adopted hardware; As result of the on field analysis, the position of the cameras must be carefully chosen to maximise the acquisition precision and the industrial area covered; These configurations are defined ideal because in these conditions the best measurement precision for a skeleton acquisition can be achieved; see page 5, Section 3.2 – depth camera (Kinect); digitalization of their bodies consists in the process of recording the movement of their skeletons. The resulting output file stores the positions of all body joints over time; see page 7, section 3.2.1 – control volume analysis; see Footnote 1 - In the MAS environment, the analyst can define 3D control volumes within the workplace (defining their dimensions and 3D position) to achieve an in-depth statistic about the locations most visited by the worker hands over time. Creating and placing control volumes on the workbenches, in the picking positions, within specific shelfs or racks or in whatsoever location, allows to distinguish between the operator picking or travelling time (no added-value activity) and the manufacturing time).
It would be obvious to combine Bortolini and Kiran and Radwin for the same reasons as claim 1.
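For illustration only (this sketch is not part of the record and does not limit the claim scope), the basis recited in claim 4 may be sketched as follows; the joint coordinates stand in for the per-frame skeleton joint positions described in the cited art, and all function names are hypothetical:

```python
# Illustrative sketch of claim 4's basis: the movement amount of the
# central position of an area circumscribing all identified joints of
# the worker, computed between two frames of skeleton-tracking output.

def bbox_center(joints):
    """Center of the axis-aligned box circumscribing all (x, y, z) joints."""
    xs = [j[0] for j in joints]
    ys = [j[1] for j in joints]
    zs = [j[2] for j in joints]
    return ((min(xs) + max(xs)) / 2,
            (min(ys) + max(ys)) / 2,
            (min(zs) + max(zs)) / 2)

def movement_amount(prev_joints, curr_joints):
    """Displacement of the circumscribed area's center between frames."""
    p = bbox_center(prev_joints)
    c = bbox_center(curr_joints)
    return sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5
```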
Response to Arguments
Applicant's arguments filed 12/24/25 have been fully considered but they are not persuasive and/or are moot in view of the new rejections.
Applicant argues new amended limitations; the arguments are moot in view of new rejections and new citations necessitated by the amendments.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 2019/0343429 Elhawary – directed to monitoring safety and productivity of physical tasks (See Abstract)
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN R GOLDBERG whose telephone number is (571)270-7949. The examiner can normally be reached 8:30 AM - 4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe can be reached at 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IVAN R GOLDBERG/ Primary Examiner, Art Unit 3619