DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-7 have been reviewed and are under consideration in this Office action.
Notice to Applicant
The following is a Non-Final Office action. In response to Examiner's Non-Final Rejection, Applicant amended the claims. Applicant's arguments and amendments regarding the 103 Rejections are persuasive, resulting in a second Non-Final action. Claims 1-7 are pending in this application and are rejected below.
Response to Amendment
Applicant’s amendments are received and acknowledged.
The amended claims overcome the 112 Rejections, which are therefore withdrawn.
Response to Arguments - 35 USC § 101
Applicant’s arguments with respect to the 35 USC 101 rejections have been fully considered, but they are not persuasive.
Applicant contends that claims have been amended to include “a plurality of sensors” and “outputting…” and as such are not Mental Processes.
Examiner respectfully disagrees. The cited limitations are each additional elements that, in performing the steps, amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)) and/or amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Applicant contends that the models of the claims are specific technical components which cannot be performed mentally.
Examiner respectfully disagrees. The models are not additional elements, and the recited steps could be performed using pen and paper. Even if the claims were amended to recite machine learning models (assuming support from the specification), the machine learning models would need significant detail in order to amount to anything more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Applicant contends at Step 2A, Prong Two that the claims provide a practical application because "work recognition is performed at a high speed…"
Examiner respectfully disagrees, as improved speed is an inherent quality of applying an abstract idea on a computing device. (See MPEP 2106.05(f): similarly, "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363 (Fed. Cir. 2015)).
The 101 Rejection is updated and maintained below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step One - First, pursuant to Step 1 of the January 2019 Guidance, 84 Fed. Reg. 53, the claims are directed to statutory categories.
Step 2A, Prong One – The claims are found to recite limitations that set forth the abstract idea(s); namely, the independent claims recite a series of steps for the abstract idea identified below.
Regarding Claim(s) 1, (additional elements bolded)
A work recognition system configured to use a plurality of work component recognition models into which information of movement, touch, hearing, position, camera, and equipment, and tools at the time of a work performed by a worker is input, the work recognition system comprising:
a plurality of sensors configured to acquire the information of movement, touch, hearing, position, camera, and equipment,
at least one processor programmed to: filter and exclude at least one of the work component recognition models based on a production instruction;
further filter and exclude at least one more of the work component recognition models in accordance with progress in the work, whereby the at least one more of the work component recognition models corresponds to work that is completed; and
recognize the work from one of the selected work component recognition models that is remaining; and
output work-specific work performance data indicative of the recognized work.
Regarding Claim(s) 6 and 7, A work recognition method / A non-transitory computer readable storage medium storing a program for causing an information processing apparatus configured to: for recognizing a work using a plurality of work component recognition models into which information of movement, touch, hearing, position, camera, and equipment, and tools at the time of a work performed by a worker is input, the method comprising
controlling a plurality of sensors to acquire the information of movement, touch, hearing, position, camera, and equipment,
filtering and excluding at least one of the work component recognition models based on a production instruction;
further filtering and excluding at least one more of the work component recognition models in accordance with progress in the work, whereby the at least one more of the work component recognition models corresponds to work that is completed; and
recognizing the work from one of the selected work component recognition models that is remaining; and
outputting work-specific work performance data indicative of the recognized work.
As drafted, this is, under its broadest reasonable interpretation, within the abstract idea grouping of "Mental processes—concepts performed in the human mind" (observation, evaluation, judgment, opinion), as the claims are directed towards selecting a work component recognition model, selecting a work component recognition model based on progress, and recognizing the work from the selected work component recognition model, all of which are concepts capable of being performed in the human mind (i.e., with pen and paper).
Further the claims are directed towards the abstract idea grouping of “Certain methods of organizing human activity” — commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations) and/or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) as the claims are directed towards providing work support information in real time. (See Specification, [04]).
Step 2A, Prong Two - This judicial exception is not integrated into a practical application. The independent claims utilize at least: a plurality of sensors configured to acquire the information of movement, touch, hearing, position, camera, and equipment; at least one processor programmed to; outputting work-specific work performance data; controlling a plurality of sensors to acquire the information of movement, touch, hearing, position, camera, and equipment; and a non-transitory computer readable medium storing a program for causing an information processing apparatus. The additional elements, in performing the steps, amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)) and/or amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements merely "apply it" on a computer. (See MPEP 2106.05(f) – Mere Instructions to Apply an Exception – "Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible." Alice Corp., 134 S. Ct. at 2358) and/or amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Regarding Claim(s) 2-5, the claims further narrow the abstract idea or recite additional elements addressed in the rejection of the independent claims above.
Accordingly, the claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nito et al. (US 20210133444 A1) in view of Yu et al. (US 20220350315 A1), and Xu et al. (US 20180150783 A1).
Regarding Claim(s) 1, Nito teaches: A work recognition system configured to use a plurality of work component recognition models into which information of movement, touch… position, camera, and equipment, and tools at the time of a work performed by a worker is input, the work recognition system comprising: (Nito, [05]; One form of a work recognition apparatus for solving the above problems acquires a reference image including a work target from an input and output unit, estimates a first relative position of the work target and a camera from the reference image, converts a two-dimensional work region for the work target included in the reference image into a three-dimensional work region using template information, stores the three-dimensional work region in a storage apparatus as work model information, estimates a second relative position of the work target and the camera in a determination image acquired by the camera and Nito, [36]; The left side of FIG. 3 shows a state in which a product 303 that is the work target is on a work table 304, and the user has designated a work region 302 for the work target 303 on a screen 301. The right side of FIG. 3 shows a state in which a product 313 that is a work target is on a work table 314, and shows a three-dimensional work region calculated from the work region 302 designated in the left side of FIG. 3, that is, a three-dimensional position 312 of the work region).
a plurality of sensors configured to acquire the information of movement, touch… position, camera, and equipment, (Nito, [02]; automatically detecting deviating operation from motion information of a worker acquired by using various sensors and Nito, [28]; camera parameters 231 managing a focal length, an aspect ratio, an optical center and the like of a camera; work target data 232 managing a shape of a work target such as a product, 3D data, product feature point position and the like; work target position data 233 storing a positional relationship (relative position) of the camera and the work target; work region data 234 storing a work region of the worker; work model information 235 storing a work position, a work feature amount and the like of each piece of work performed by the worker and Nito, [36]; It is desirable to arrange the camera 101 at a position where the worker's hand can perform capturing in each pieces of work. The camera 101 is assumed to be a commercially available network camera. Each camera has an identification information ID for identifying each camera).
at least one processor programmed to: (Nito, [22]; The image processing apparatus 104 is a computer including an input and output unit 201, a memory 202, a storage apparatus 203, and a processing unit (hereinafter, CPU) 204. The input and output unit 201, the memory 202, the storage apparatus 203, and the CPU 204 are connected via a bus 205).
further filter and exclude at least one more of the work component recognition models in accordance with progress in the work; (Nito, [34]; The work recognition unit 220 includes a work division unit 221, a work model selection unit 222, a work determination unit 223, and a work determination result output unit 224. The work division unit 221 divides motion of the worker shown in the captured moving image into pieces of work. The work model selection unit 222 determines which work the divided work corresponds to and Nito, [77]; The work model selection unit 222 selects a work model that matches each piece of work (S1105). The camera ID indicating each piece of work, the position of the hand or joint, and the reference operation model obtained in the same manner as in step S705 are compared with the determination operation model of the determination image and Nito, [81]; the work determination unit 223 collates the selected work model with the work flow to determine whether the work is correct. If there is no matching work model in step S1105, it is determined that the work is not performed correctly, and if there is a matching work model, it is determined whether the worker's work is performed correctly (S1106) and Nito, [Fig. 11, elements s1105 and s1106]; visual representation of iteratively selecting models (i.e. further filtering)). Examiner notes that selecting a model of a plurality of models would exclude the non-selected models.
whereby the at least one more of the work component recognition models corresponds to work that is completed; (Nito, [34]; The work model selection unit 222 determines which work the divided work corresponds to. The work determination unit 223 collates the determined work with the work flow to determine whether the flow of work is correct. The work determination result output unit 224 displays the result of the determination made by the work determination unit 223 on the input and output apparatus 105 having a display device via the input and output unit 201 and Nito, [81-82]; the work determination unit 223 collates the selected work model with the work flow to determine whether the work is correct. If there is no matching work model in step S1105, it is determined that the work is not performed correctly, and if there is a matching work model, it is determined whether the worker's work is performed correctly (S1106)… As long as it is determined that the work is correct in the work determination, steps S1105 and S1106 are repeatedly performed by storing the work number in flow that has completed in each work in the progress information storage unit).
recognize the work from one of the selected work component recognition models that is remaining; (Nito, [77]; The work model selection unit 222 selects a work model that matches each piece of work (S1105). The camera ID indicating each piece of work, the position of the hand or joint, and the reference operation model obtained in the same manner as in step S705 are compared with the determination operation model of the determination image).
output work-specific work performance data indicative of the recognized work. (Nito, [28]; a work feature amount and the like of each piece of work performed by the worker; work flow information 236 storing the order of work, contents of each piece of work and the like; and work progress data 237 storing progress of work and Nito, [82]; As long as it is determined that the work is correct in the work determination, steps S1105 and S1106 are repeatedly performed by storing the work number in flow that has completed in each work in the progress information storage unit. When it is determined that all pieces of work are correct, the work recognition result is output indicating that there is no deviating operation, and when it is determined that even one work is not correct, the work recognition result is output indicating that there is deviating operation).
While Nito teaches filtering work models, products, and filtering based upon progress, Nito does not appear to explicitly teach filtering based upon instructions. However, Nito in view of Yu does teach: filter and exclude at least one of the work component recognition models based on a production instruction; (Yu, [05]; a data acquisition automation control system in the factory to obtain a data model representing processing steps of a product, and matching the processing steps against the process attributes of the product IoT model and determining the status of the production order in the factory on the basis of the matching result).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Nito, including filtering work models, products, and filtering based upon progress, with the teachings of Yu, including filtering and excluding at least one of the work component recognition models based on a production instruction, in order to find the best product matching result for further analysis (Yu, [11]; the product IoT model also comprising at least process attributes of product processing, an IoT model association unit (206), configured to associate a production IoT model with a product IoT model having the same process attributes, a data model acquisition unit (208), configured to learn data of a production device acquired by a data acquisition automation control system in the factory to obtain a data model representing processing steps of a product, and an order status determination unit (210), configured to match the processing steps against the process attributes of the product IoT model and determine the status of the production order in the factory on the basis of the matching result).
While Nito teaches a system/apparatus/method for work recognition including position, movement, camera, etc., Nito does not appear to explicitly teach the use of hearing. However, Nito in view of the analogous art of Xu (i.e. work modeling) does teach the entirety of the limitation. (Xu, [71]; IO devices 1507 may include an audio device. An audio device may include a speaker and/or a microphone to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and/or telephony functions. Other IO devices 1507 may further include universal serial bus (USB) port(s), parallel port(s), serial port(s), a printer, a network interface, a bus bridge (e.g., a PCI-PCI bridge), sensor(s) (e.g., a motion sensor such as an accelerometer, gyroscope, a magnetometer, a light sensor, compass, a proximity sensor, etc.), or a combination thereof. Devices 1507 may further include an imaging processing subsystem (e.g., a camera), which may include an optical sensor, such as a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, utilized to facilitate camera functions, such as recording photographs and video clips).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Nito, including a system/apparatus/method for work recognition including position, movement, camera, etc., with the teachings of Xu, including the use of hearing, in order to provide a system that allows for a plurality of input sensors to fit a user's needs (Xu, [71]; IO devices 1507 may include an audio device. An audio device may include a speaker and/or a microphone to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and/or telephony functions. Other IO devices 1507 may further include universal serial bus (USB) port(s), parallel port(s), serial port(s), a printer, a network interface, a bus bridge (e.g., a PCI-PCI bridge), sensor(s) (e.g., a motion sensor such as an accelerometer, gyroscope, a magnetometer, a light sensor, compass, a proximity sensor, etc.), or a combination thereof. Devices 1507 may further include an imaging processing subsystem (e.g., a camera), which may include an optical sensor, such as a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, utilized to facilitate camera functions, such as recording photographs and video clips. Certain sensors may be coupled to interconnect 1510 via a sensor hub (not shown), while other devices such as a keyboard or thermal sensor may be controlled by an embedded controller (not shown), dependent upon the specific configuration or design of system 1500).
Regarding Claim(s) 2, Nito/Yu/Xu teaches: The work recognition system according to claim 1, wherein the work component recognition model is narrowed down to a work of one product by the at least one processor. (Nito, [28]; The storage apparatus 203 stores the various types of information 230. The various types of information 230 include information including: camera parameters 231 managing a focal length, an aspect ratio, an optical center and the like of a camera; work target data 232 managing a shape of a work target such as a product, 3D data, product feature point position and the like; work target position data 233 storing a positional relationship (relative position) of the camera and the work target and Nito, [36]; The left side of FIG. 3 shows a state in which a product 303 that is the work target is on a work table 304, and the user has designated a work region 302 for the work target 303 on a screen 301. The right side of FIG. 3 shows a state in which a product 313 that is a work target is on a work table 314, and shows a three-dimensional work region calculated from the work region 302 designated in the left side of FIG. 3, that is, a three-dimensional position 312 of the work region. The work table 304 and the work table 314 are the same work table, and the product 303 and the product 313 are the same product and Nito, [75]; As described above, the reference image including the work target is acquired, the first relative position of the work target and the camera is estimated from the reference image, the two-dimensional work region for the work target included in the reference image is converted into a three-dimensional work using the template information, and stored in the storage apparatus as work model information).
Regarding Claim(s) 3, Nito/Yu/Xu teaches: The work recognition system according to claim 1, wherein the work component recognition model for the completed work is excluded by the at least one processor; (Nito, [65]; In the example of FIG. 10B, the work flow is composed of four pieces of work, and for the work number in flow “1”, determination is made by the image of the camera ID “1”, the work ID is “1”, there is no premised work, and work can be performed from the beginning. For the work number in flow “2”, determination is made by the image of the camera ID “1”, the work ID is “2”, the premised work number in flow is “1”, and work can be performed after completion of the work number in flow “1”. This is similar for the work number in flow “3” and “4”, and designation of two work number in flow “2” and “3” for the work number in flow “4” indicates that the work can be performed after both the work number in flow “2” and “3” are completed). Examiner notes that the models are selected prior to completion; once the step is completed, the next step commences and, as such, the completed work is excluded (i.e., not selected).
Regarding Claim(s) 4, Nito/Yu/Xu teaches: The work recognition system according to claim 1, wherein it is determined whether or not the work can be recognized after the selection is made by the at least one processor, (Nito, [81]; the work determination unit 223 collates the selected work model with the work flow to determine whether the work is correct. If there is no matching work model in step S1105, it is determined that the work is not performed correctly, and if there is a matching work model, it is determined whether the worker's work is performed correctly (S1106)).
when the work is not recognized, the at least one processor does not filter and exclude, the work is recognized, a work recognition result is output, and the selection is made by the at least one processor, and (Nito, [78-80]; the degree of matching can be calculated by calculating the probability that divided pieces of analysis information occur. Even when the operation model is represented by a hidden Markov model, the probability that the divided pieces of analysis information can occur can be calculated, and the degree of matching can be calculated… The work model selection unit 222 selects a work model in which the degree of matching of the positions and the operation model is equal to or greater than a threshold and is the highest… If there is no work model that is equal to or greater than the threshold, determination is made that there is no matching work model).
when the work is recognized, a work recognition result is output, and the selection is made by the at least one processor (Nito, [81-82]; the work determination unit 223 collates the selected work model with the work flow to determine whether the work is correct. If there is no matching work model in step S1105, it is determined that the work is not performed correctly, and if there is a matching work model, it is determined whether the worker's work is performed correctly (S1106). In this determination, one determination criterion is whether the worker is working in the pixel coordinates 1005 of the region of the work model information 235 in the determination image. In addition, whether the work is performed correctly may be determined by comparing the reference operation model and the operation model on the determination screen. For the entire work, collation is performed with the work flow information 236 and work progress information, and if the work is possible at that time, it is determined that the work is performed correctly, and if it is other work, it is determined that the work is not performed correctly… As long as it is determined that the work is correct in the work determination, steps S1105 and S1106 are repeatedly performed by storing the work number in flow that has completed in each work in the progress information storage unit.).
Regarding Claim(s) 5, Nito/Yu/Xu teaches: The work recognition system according to claim 1, comprising a work recognition model library that stores a work component recognition model referred to by the at least one processor. (Nito, [34]; The work recognition unit 220 includes a work division unit 221, a work model selection unit 222, a work determination unit 223, and a work determination result output unit 224. The work division unit 221 divides motion of the worker shown in the captured moving image into pieces of work. The work model selection unit 222 determines which work the divided work corresponds to. The work determination unit 223 collates the determined work with the work flow to determine whether the flow of work is correct. The work determination result output unit 224 displays the result of the determination made by the work determination unit 223 on the input and output apparatus 105 having a display device via the input and output unit 201 and Nito, [45]; A plurality of calibration patterns such as checker patterns and dot patterns are captured by the camera 101, and the captured images are acquired and stored by the recording apparatus 103 via the network 102. Then, the input and output unit 201 reads the corresponding image from the recording apparatus 103 and stores the corresponding image in the memory 202). Examiner interprets the images in memory as the library.
Regarding Claim(s) 6 and 7, Nito teaches: A work recognition method/A non-transitory computer readable medium storing a program for causing an information processing apparatus configured to recognize a work using a plurality of work component recognition models into which information of movement, touch, hearing, position, camera, and equipment, and tools at the time of a work performed by a worker is input to execute: (Nito, [25]; The storage apparatus 203 includes HDD, SSD or the like which is a non-volatile storage medium, and stores a position estimation program, a work region estimation program, a work flow creation program, and a work recognition program. These programs are stored in the memory 202 and are executed by the CPU 204 to achieve various functions. In the description below, for easy understanding of the description, functions achieved by the CPU 204 executing the position estimation program and Nito, [05]; One form of a work recognition apparatus for solving the above problems acquires a reference image including a work target from an input and output unit, estimates a first relative position of the work target and a camera from the reference image, converts a two-dimensional work region for the work target included in the reference image into a three-dimensional work region using template information, stores the three-dimensional work region in a storage apparatus as work model information, estimates a second relative position of the work target and the camera in a determination image acquired by the camera and Nito, [36]; The left side of FIG. 3 shows a state in which a product 303 that is the work target is on a work table 304, and the user has designated a work region 302 for the work target 303 on a screen 301. The right side of FIG. 3 shows a state in which a product 313 that is a work target is on a work table 314, and shows a three-dimensional work region calculated from the work region 302 designated in the left side of FIG. 3, that is, a three-dimensional position 312 of the work region).
controlling a plurality of sensors to acquire the information of movement, touch, hearing, position, camera, and equipment; (Nito, [02]; automatically detecting deviating operation from motion information of a worker acquired by using various sensors and Nito, [28]; camera parameters 231 managing a focal length, an aspect ratio, an optical center and the like of a camera; work target data 232 managing a shape of a work target such as a product, 3D data, product feature point position and the like; work target position data 233 storing a positional relationship (relative position) of the camera and the work target; work region data 234 storing a work region of the worker; work model information 235 storing a work position, a work feature amount and the like of each piece of work performed by the worker and Nito, [36]; It is desirable to arrange the camera 101 at a position where the worker's hand can perform capturing in each pieces of work. The camera 101 is assumed to be a commercially available network camera. Each camera has an identification information ID for identifying each camera).
further filtering and excluding at least one more of the work component recognition model in accordance with progress in the work; and (Nito, [34]; The work recognition unit 220 includes a work division unit 221, a work model selection unit 222, a work determination unit 223, and a work determination result output unit 224. The work division unit 221 divides motion of the worker shown in the captured moving image into pieces of work. The work model selection unit 222 determines which work the divided work corresponds to and Nito, [77]; The work model selection unit 222 selects a work model that matches each piece of work (S1105). The camera ID indicating each piece of work, the position of the hand or joint, and the reference operation model obtained in the same manner as in step S705 are compared with the determination operation model of the determination image and Nito, [81]; the work determination unit 223 collates the selected work model with the work flow to determine whether the work is correct. If there is no matching work model in step S1105, it is determined that the work is not performed correctly, and if there is a matching work model, it is determined whether the worker's work is performed correctly (S1106) and Nito, [Fig. 11, elements s1105 and s1106]; visual representation of iteratively selecting models (i.e. further filtering)). Examiner notes that selecting a model of a plurality of models would exclude the non-selected models.
whereby the at least one or more of the work component recognition models corresponds to work that is completed; (Nito, [34]; The work model selection unit 222 determines which work the divided work corresponds to. The work determination unit 223 collates the determined work with the work flow to determine whether the flow of work is correct. The work determination result output unit 224 displays the result of the determination made by the work determination unit 223 on the input and output apparatus 105 having a display device via the input and output unit 201 and Nito, [81-82]; the work determination unit 223 collates the selected work model with the work flow to determine whether the work is correct. If there is no matching work model in step S1105, it is determined that the work is not performed correctly, and if there is a matching work model, it is determined whether the worker's work is performed correctly (S1106)… As long as it is determined that the work is correct in the work determination, steps S1105 and S1106 are repeatedly performed by storing the work number in flow that has completed in each work in the progress information storage unit).
recognizing the work from one of the selected work component recognition models that is remaining; and (Nito, [77]; The work model selection unit 222 selects a work model that matches each piece of work (S1105). The camera ID indicating each piece of work, the position of the hand or joint, and the reference operation model obtained in the same manner as in step S705 are compared with the determination operation model of the determination image).
outputting work-specific work performance data indicative of the recognized work. (Nito, [28]; a work feature amount and the like of each piece of work performed by the worker; work flow information 236 storing the order of work, contents of each piece of work and the like; and work progress data 237 storing progress of work and Nito, [82]; As long as it is determined that the work is correct in the work determination, steps S1105 and S1106 are repeatedly performed by storing the work number in flow that has completed in each work in the progress information storage unit. When it is determined that all pieces of work are correct, the work recognition result is output indicating that there is no deviating operation, and when it is determined that even one work is not correct, the work recognition result is output indicating that there is deviating operation).
While Nito teaches filtering work models, products, and filtering based upon progress, Nito does not appear to explicitly teach filtering based upon instructions. However, Nito in view of the analogous art of Yu (i.e. data analytics) does teach: filtering and excluding at least one of the work component recognition models based on a production instruction; (Yu, [05]; a data acquisition automation control system in the factory to obtain a data model representing processing steps of a product, and matching the processing steps against the process attributes of the product IoT model and determining the status of the production order in the factory on the basis of the matching result).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Nito including filtering work models, products, and filtering based upon progress with the teachings of Yu including filtering and excluding at least one or more of the work component recognition models based on a production instruction in order to find the best product matching result for further analysis (Yu, [05]; a data acquisition automation control system in the factory to obtain a data model representing processing steps of a product, and matching the processing steps against the process attributes of the product IoT model and determining the status of the production order in the factory on the basis of the matching result).
While Nito teaches a system/apparatus/method for work recognition including position, movement, camera, etc., Nito does not appear to explicitly teach the use of hearing. However, Nito in view of the analogous art of Xu (i.e. work modeling) does teach the entirety of the limitation. (Xu, [71]; IO devices 1507 may include an audio device. An audio device may include a speaker and/or a microphone to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and/or telephony functions. Other IO devices 1507 may further include universal serial bus (USB) port(s), parallel port(s), serial port(s), a printer, a network interface, a bus bridge (e.g., a PCI-PCI bridge), sensor(s) (e.g., a motion sensor such as an accelerometer, gyroscope, a magnetometer, a light sensor, compass, a proximity sensor, etc.), or a combination thereof. Devices 1507 may further include an imaging processing subsystem (e.g., a camera), which may include an optical sensor, such as a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, utilized to facilitate camera functions, such as recording photographs and video clips).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Nito including a system/apparatus/method for work recognition including position, movement, camera, etc. with the teachings of Xu including the use of hearing in order to provide a system that allows for a plurality of input sensors to fit a user's needs (Xu, [71]; IO devices 1507 may include an audio device. An audio device may include a speaker and/or a microphone to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and/or telephony functions. Other IO devices 1507 may further include universal serial bus (USB) port(s), parallel port(s), serial port(s), a printer, a network interface, a bus bridge (e.g., a PCI-PCI bridge), sensor(s) (e.g., a motion sensor such as an accelerometer, gyroscope, a magnetometer, a light sensor, compass, a proximity sensor, etc.), or a combination thereof. Devices 1507 may further include an imaging processing subsystem (e.g., a camera), which may include an optical sensor, such as a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, utilized to facilitate camera functions, such as recording photographs and video clips. Certain sensors may be coupled to interconnect 1510 via a sensor hub (not shown), while other devices such as a keyboard or thermal sensor may be controlled by an embedded controller (not shown), dependent upon the specific configuration or design of system 1500).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L GUNN whose telephone number is (571)270-1728. The examiner can normally be reached Monday - Friday 6:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor can be reached on (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEREMY L GUNN/ Examiner, Art Unit 3624