Prosecution Insights
Last updated: April 19, 2026
Application No. 18/527,456

WORK SIMULATION SYSTEM, WORK SIMULATION METHOD, AND NON-TRANSITORY STORAGE MEDIUM

Non-Final OA (§103)
Filed: Dec 04, 2023
Examiner: GOLDBERG, IVAN R
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 3 (Non-Final)
Grant Probability: 35% (At Risk)
OA Rounds: 3-4
To Grant: 4y 8m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 35% (128 granted / 365 resolved; -16.9% vs TC avg)
Interview Lift: +36.9% (resolved cases with interview)
Avg Prosecution: 4y 8m (typical timeline); 57 currently pending
Total Applications: 422 (career history, across all art units)
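The headline figures above are simple ratios over the examiner's resolved docket. As a rough illustration (the function names are our own, and the with/without-interview split used below is a hypothetical example, not data from the card; only the 128 granted / 365 resolved counts come from above):

```python
# Hedged sketch: deriving examiner-level metrics from raw docket counts.
# Only "128 granted / 365 resolved" is taken from the card above; the
# with/without-interview rates passed to interview_lift are illustrative.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference in allowance rate between resolved
    cases that had an examiner interview and those that did not."""
    return rate_with - rate_without

career = allow_rate(128, 365)
print(f"Career allow rate: {career:.1f}%")   # ~35.1%

# Hypothetical with/without split, for illustration only:
print(f"Interview lift: {interview_lift(62.0, 25.1):.1f} points")
```

The same arithmetic applies per statute: each §101/§102/§103/§112 figure is the rate at which this examiner's rejections under that statute are overcome, compared against the Tech Center average.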

Statute-Specific Performance

§101: 27.7% (-12.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 365 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/21/25 has been entered.

Notice to Applicant

The following is a Non-Final Office action. In response to Examiner’s Final Rejection of 9/22/25, Applicant, on 11/21/25, amended claims. Claims 1-4, 6-10, 12-16, 18, and 20 are pending in this application and have been rejected below.

Response to Amendment

The objection to claim 6 is withdrawn by the amendment. The 112(b) rejection for claim 6 is withdrawn by the amendment. The “display unit” is removed from the claim interpretation sections based on the amendment and as pointed out by Applicant in the Remarks (Remarks, 11/21/25, page 8).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “acquisition unit, determination unit, output unit” in claim 1. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
[0044] as published states “the functional blocks of the work simulation system 10 illustrated in FIG. 1, namely the acquisition unit 11, the determination unit 12, and the output unit 13, can be constituted by the CPU, the storage unit, and other circuits in term of hardware. Alternatively, the functional blocks can be implemented by the programs etc. stored in the storage unit in terms of software.” [0104] as published states “Such a program can be stored using various types of non-transitory computer-readable media, and supplied to a computer. The non-transitory computer-readable media include various types of tangible storage media.”

Based on FIG. 1 and [0044, 0104], and the claim reciting “processor configured to function as”, Examiner interprets each of “acquisition unit, determination unit, output unit” as referring to the structure of a computer, storing code/instructions, and executing the different functional limitations for each “unit.”

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Reasons for Subject Matter Eligibility under 35 USC 101

Claim 1 overcomes the 101 rejections because the claim now recites:

1) a hardware processor configured to function as, an acquisition unit configured to acquire information that indicates a position of a work portion of the work target object and information that indicates an approaching direction in which the virtual character approaches the work portion;

2) a determination unit configured to determine a posture that is suitable for work to be executed on the work portion by the virtual character facing the approaching direction;

3) a display configured to receive an output posture suitable for the simulation work to be executed by the virtual character facing the approaching direction, displaying a simulation state showing at least the position of the work portion and the posture; and

4) the display is configured to receive an input from a user to select a specific work portion and based on the input, switch to display a simulation state showing at least the position of the specific work portion and work portion information relating to the specific work portion.

When viewing the claim as a whole, this, when combined with the earlier limitations, is viewed as not being “directed to an abstract idea.” In addition, it is a practical application under step 2A, prong 2, as the claim is improving another technology when viewing all the limitations listed above (see MPEP 2106.05(a)) and/or is viewed as using a judicial exception in a meaningful way under MPEP 2106.05(e). The same reasons also apply to independent claims 7 and 13, which have similar limitations, and to the dependent claims for the same reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 6-10, 12-16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Caputo, et al., “Workplace design ergonomic validation based on multiple human factors assessment methods and simulation,” 2019, Production & Manufacturing Research, 7:1, pages 195-222, in view of Meess (US 2018/0130376) and Leu, “CAD model based virtual assembly simulation, planning and training”, 2013, CIRP Annals - Manufacturing Technology, Vol. 62, pages 799-822.
Concerning claim 1, Caputo discloses: A work simulation system configured to output a state in which a virtual character in a virtual space executes predetermined work on a work target object (Caputo – page 202-203 – set the virtual scenario by using a Virtual Reality simulation software that allows simulating the production processes, as shown in Figure 3. For completing this step, it is necessary to have the CAD files of the products and the resources, the SOPs and a database of Digital Human Models. Once the WP is set, it is possible to use the DHM, customised according to the desired anthropometric measures, to simulate the operating tasks. The most complex DHMs are cinematised with realistic biomechanical properties, composed by a high number of segments connected by joints made up of all d.o.f. (degrees of freedom) corresponding to the real human articulations. See page 206, Section 3.1 - From the simulation, it can be extracted a high amount of data which allow to perform a detailed analysis of working postures), comprising: a hardware processor configured to function (Caputo – See page 196, last paragraph – page 197, 1st paragraph - Virtual ergonomics approach allows overcoming these troubles creating a virtual model of the plant that contains virtual models of products and related components, as shown in the following sections. In this virtual scenario, a Digital Human Model (DHM) is able to reproduce operating tasks of each workstation dynamically and a computational evaluation of indexes is carried out, making possible a human-centred design of the workplace; see page 211, Section 3.2 - The ‘Force Solver’ tool, integrated in Tecnomatix Process Simulate software, enables to analyse the maximum force that a human model can exert in a posture. Using the ‘Force solver’ tools of Tecnomatix Process Simulate® software, it is possible evaluating the maximum force that the worker can exert during the screwing activities; see FIG. 12 – screenshot of a GUI). 
Caputo does not explicitly say “computer” though it has many screenshots and disclosure of software simulation. Meess discloses the processor explicitly: a “hardware processor” configured to function (Meess par 64, FIG. 13 - System 400 includes a logic processor-based subsystem 410, which may be programmable and operable to execute coded instructions for generating the visual cues and audio cues to be overlaid and aligned with a digital image of the welding environment 480 and/or displayed on a combiner of a HUD.) Caputo and Meess disclose: an acquisition unit configured to acquire information that indicates a position of a work portion of the work target object (Caputo – see page 204, last paragraph - According to the SOP, provided by FCA, the activities can be schematically described as shown in Table 3. The working time of each sub-task is compliant with MTM (Method Time Measurements) and its evolutions, the most known and applied methods for predicting working time in a typical batch production system. By dividing the task in micro movements (reach, grasp, position, release, move and so on), it allows determining times; see page 204, 5th paragraph - The distances between the equipment are described in Figure 5; see page 213, 3rd paragraph - Geometrical (distances) and physical (durations) data can be extracted from the simulation and, knowing the physical properties of the floor, it is possible to evaluate the initial and the sustained forces (in N) exerted by the worker by the Equations 5 and 6, neglecting the friction forces between bearing and wheel hub and between pivot and wheel; see also Leu page 801, col.
1, 1st paragraph - Key to these enabling technologies is the use of CAD model based simulation, including computer graphics, VR, and augmented reality (AR) as the basis for developing advanced tools (software and hardware) and systems for assembly planning and training; see page 807, Section 4 - in virtual assembly, generating accurate positions and orientations to update virtual parts is important for generating realistic assembly simulations; see page 807, section 4.1 - When the user operates objects in the VE and moves related objects close to each other, the potential geometric constraints can be captured. The precise position and orientation of each of these objects can be calculated with a constraint solver, and the constraint-based motion can be simulated. Constraint-based modeling can be based on either positional or geometric constraints; See also Meess– see par 78 - frame-by-frame point cloud analysis of the rigid bodies (i.e., the calibrated targets, which can include welding tool 460 and welding helmet 440) that includes three or more point markers. Upon recognition of a known rigid body, position and orientation are calculated relative to the camera origin and the “trained” rigid body orientation. Calibrating and “training” the spatial tracker 420 to recognize the position and orientation in three-dimensional space of rigid bodies such as the welding tool 460 and welding helmet 440 is known in the relevant art) and information that indicates an approaching direction in which the virtual character approaches the work portion (Applicant’s [0036] as published - The acquisition unit 11 acquires information that indicates the position of a work portion of a work target object and information that indicates an approaching direction in which the virtual character approaches the work portion. The work target object is a finished product, a part, a structure, or a facility, for example. 
The work portion is a portion of the work target object on which the virtual character executes work and which has a predetermined area. Caputo – see page 207 - The OWAS evaluation tool from Tecnomatix Process Simulate software enables to analyse operations according to predefined joint values. Running the simulation, it is easy to evaluate the different postures according to the OWAS method; see page 210, FIG. 10 – showing an approaching direction for posture of virtual character evaluated by OWAS. See also Meess – see par 106-107 - Preferred embodiments can also generate and display other virtual objects that can aid or train the user. For example, as seen in FIG. 22, virtual object 908 represents an obstruction that welder may encounter in a real-world welding scenario. The virtual obstruction object 908 can be generated and displayed on the display 441 next to the weld joint 480C to simulate an object that will prevent the user from … properly positioning the welding tool 460 as desired in a real-word scenario. In some embodiments, visual and audio cues can be used to aid the user in navigating the obstruction.
For example, visual cues related to the weld tool position, orientation and motion, e.g., CTWD, travel speed, work angle, travel angle and aim, can be, e.g., displayed on a fixed location on the display 441 and/or “attached” to the welding tool 480, as discussed above, to help the user in navigating the obstruction; the logic processor-based subsystem 410 and/or another computer includes a database of predefined obstruction categories such as e.g., an overhanging obstruction over the weld joint, an obstruction between the user and the weld joint, and obstruction that is very close to the weld joint, and/or another type of obstruction, which can be uploaded, by e.g., the user and/or the instructor, into welding training sessions); a determination unit configured to determine a posture that is suitable for the simulation work to be executed on the work portion by the virtual character facing the approaching direction (Caputo see page 196, 2nd to last paragraph - Evaluating ergonomic indexes means evaluating several physical parameters (joint angles, force, pressures, etc.), that require the use of many tools (i.e. motion capture systems, dynamometers, electromyography, cyber-gloves) for a proper design of the workplace; see page 206, Section 3.1.1, 1st paragraph - From the simulation, it can be extracted a high amount of data which allow to perform a detailed analysis of working postures. 
The most important data are those ones regarding posture angles, for which it is possible to plot the trends over the time for each one of the 71 segments of the virtual mannequin; last paragraph - allows to identify, within a production cycle, the operations and/or phases potentially dangerous for the musculoskeletal system, quantifying the level of risk; page 206, section 3.1.1, 3rd paragraph - From the posture angles trends, the values and the durations they reach, it is possible to note that the assumed postures do not seem significantly onerous for the biomechanical overload due to working postures since, according to the Standard ISO 11226, there are not static awkward working postures (a posture is static if hold for at least four consecutive seconds) of trunk and upper limbs. see page 210, FIG. 10 - Postures assumed during screwings evaluated by OWAS. see page 211, section 3.2, 1st paragraph - The ‘Force Solver’ tool, integrated in Tecnomatix Process Simulate software, enables to analyse the maximum force that a human model can exert in a posture. It allows specifying the posture and all input parameters. The analysis provides the maximum allowable force along a specified direction. 4th paragraph - The evaluation shall be performed for the counter-reaction forces of tightening end that the worker needs to exert in a certain direction that depend on the geometry of the resource, the tightening torque and the assumed posture; see also Leu – see page 818, section 8.2.3 – The fastening operation predominantly involves the upper body of the operator, so RULA is a useful tool for ergonomic analysis. RULA has been developed for use in ergonomic investigation of workplaces [120], and is especially useful for scenarios in which work-related upper limb disorders are reported. 
RULA uses a scoring system based on posture); and an output unit configured to output the posture determined by the determination unit (Caputo – see page 216, Table 9 – Evaluation of awkward posture score for right and left limbs; see FIG. 10a as example); and a display configured to receive an output from the output unit (Caputo – see page 202, FIG. 3 – Virtual scenario setting using Digital Human Models (DHM), parts, and tools/resources, then Simulation, then Data analysis, which includes animation of simulation; see also FIG. 10a-10b) see also Leu - see page 812, col. 2, 1st paragraph - Based on the operator’s experience, different assembly information can be retrieved to guide the assembly operation. Image-based instructions indicating the assembly operations can be stored in the assembly instruction database. At the same time, other means, such as video clips and graphical primitives in the form of short labels, text, and arrows that help the operator understand how to execute the assembly operations, also can be stored. see FIG. 17; see page 818, section 8.2.3 – The fastening operation predominantly involves the upper body of the operator, so RULA is a useful tool for ergonomic analysis. RULA uses a scoring system based on posture; The RULA analysis can be used to determine the risk levels associated with particular postures and to suggest actions needed in order to reduce the risk of long-term ergonomic injuries and to design safer workplaces. see page 800, col.
1, 4th paragraph - With motion capture and 3D visualization capabilities, interactions among products, processes and human operators can be analyzed and evaluated to identify potential problems during assembly, such as awkward postures, poor workcell layout, insufficient tools and fixtures and inability to access parts; see also Meess see par 65 - the display device 440A can also playback or display a variety of media content such as videos, documents (e.g., in PDF format or another format), audio, graphics, text, or any other media content that can be displayed or played back on a computer. This media content can include, for example, instructional information), the display is configured to display each of a work portion list including the work portion (See FIG. 6, [0100] as published - As illustrated in FIG. 6, a display unit 14 displays work portion list display DS1, simulation state display DS2, and work portion information display DS3. By selecting one of the work portions P4 and P5 in the work portion list display DS1 displayed on the display unit 14, the worker can switch to display the simulation state display DS2 and the work portion information display DS3 at the selected work portion. Caputo discloses the limitations based on broadest reasonable interpretation in light of the specification – see page 204, last paragraph - According to the SOP (Standard Operating Procedures), provided by FCA (Fiat Chrysler Automobiles), the activities can be schematically described as shown in Table 3. See page 205, last paragraph – Table 5 shows the simulation frames corresponding to the sub-tasks described in Table 3 on page 206 (e.g. sub-tasks 1-19); page 208 shows Table 5 and displays the simulation frames see also Leu see page 812, section 6.3 – AR can be applied to provide useful, relevant assembly instructions in the real environment in the operator’s field of view so that he/she does not need to exert additional body movements to retrieve instructions.
see page 812, col. 2, 1st paragraph - Based on the operator’s experience, different assembly information can be retrieved to guide the assembly operation. Image-based instructions indicating the assembly operations can be stored in the assembly instruction database. At the same time, other means, such as video clips and graphical primitives in the form of short labels, text, and arrows that help the operator understand how to execute the assembly operations, also can be stored. see FIG. 17 see also Meess see par 65 - the display device 440A can also playback or display a variety of media content …can include, for example, instructional information on welding that the trainee can review prior to performing a weld (e.g., general information on welding, information on the specific weld joint or weld procedure the user wishes to perform, etc; ), a simulation state showing at least the position of the work portion (Caputo see page 208 shows Table 5 and displays the simulation frames, showing many different work portions in 19 sub-tasks; see also Leu – page 812, Col. 2 - an AR-assisted assembly system that incorporates Virtual Interaction Panels (VirIPs) to directly acquire a relevant understanding of the surrounding assembly scene from the human assembler’s perspective. Their approach uses a visual assembly tree structure (VATS) to manage the assembly information and retrieve the relevant instructions for the assembly operators in the AR environment. It can be integrated directly into an AR system or reside on a remote computer as a central control station to control the assembly information data flow during the entire assembly process. Based on the operator’s experience, different assembly information can be retrieved to guide the assembly operation. Image-based instructions indicating the assembly operations can be stored in the assembly instruction database.
At the same time, other means, such as video clips and graphical primitives in the form of short labels, text, and arrows that help the operator understand how to execute the assembly operations, also can be stored; see also Meess – See FIG. 22, par 106 - the logic processor-based subsystem 410 can generate virtual weld objects 904 (multiple welds to perform) that visually show where the user should place the welds on a coupon/workpiece 480A, B) and the posture determined by the determination unit (Caputo – see page 208, Table – simulation frames for sub-tasks of Table 4, for “rear sound adsorbing panels assembly”; see page 215, last paragraph – awkward postures (table 9) evaluated by analyzing trends over time of shoulder, elbow, and wrist postural angles, as well as working postures for the whole body; see page 218-219, section 3.5 – validating design for indexes being “low-risk” and “ergonomically safe”; Once the iteration is completed and, after eventual design changes, the design has been numerically validated, for definitively validating the workplace design, according to Digital Manufacturing strategy, it is possible to perform a rapid physical simulation in which a worker reproduces the working task in a laboratory and the ergonomists can assess the ergonomic indexes experimentally; See also Meess – see par 62 - the computer system 160 and/or the welding system 14 can be configured with different “views” or image screens that the welder can select. For example, after a welder e.g., logging into the computer system 160 or welding system 14, … computer system 160 and/or the welding system 14 can display a set of “views” that are specific to the welder, e.g., based on the welder's preferences, experience level, etc; see par 64 - System 400 includes … generating the visual cues and audio cues to be overlaid and aligned with a digital image of the welding environment 480 and/or displayed on a combiner of a HUD.
See par 106 - the logic processor-based subsystem 410 can generate virtual weld objects 904 that visually show where the user should place the welds on a coupon/workpiece 480A, B – FIG. 22 shows two different welds 904. see par 107 - as seen in FIG. 22, virtual object 908 represents an obstruction that welder may encounter in a real-world welding scenario. The virtual obstruction object 908 can be generated and displayed on the display 441 next to the weld joint 480C (See FIG. 18) to simulate an object that will prevent the user from … properly positioning the welding tool 460 as desired in a real-word scenario. In some embodiments, visual and audio cues can be used to aid the user in navigating the obstruction. For example, visual cues related to the weld tool position, orientation and motion, e.g., CTWD, travel speed, work angle, travel angle and aim, can be, e.g., displayed on a fixed location on the display 441 and/or “attached” to the welding tool 480, as discussed above, to help the user in navigating the obstruction); and work portion information (Applicant’s FIG. 6 gives examples of “work information” as any information – e.g. “Work time: 30 minutes”, “work Operation: Valve welding”) Caputo – see page 204, last paragraph - working time of each sub-task is compliant with MTM (Method Time Measurements) and its evolutions, the most known and applied methods for predicting working time in a typical batch production system. By dividing the task in micro movements (reach, grasp, position, release, move and so on), it allows determining times; see page 212-213, FIG. 14, showing “task number 14”; FIG. 12 shows force of using screwdriver for task see also Meess – see FIG. 18, par 85 – visual cues include “arc length: xxx; heat input: xxx” on display 441 in 721; see par 86 - For example, as seen in FIG. 18, visual cues 700 for CTWD 704, work angle 706 and travel speed 708 are overlaid on the welding tool 460.
As the welding tool 460 moves, the visual cues CTWD 704, work angle 706 and travel speed 708 will move with the welding tool 460 so as to maintain the same relative position to the welding tool 460. In contrast, as discussed above, other visual cues can be fixed to a location on the display 441. For example, visual cues such as welding voltage 712, welding current 714 and wire feed speed 716 are fixedly mapped to a corner of display 441 in a window 702 (disclosing “work information”) and visual cues 720 are fixedly mapped to a position on the display 441; see FIG. 22, par 106 - in addition, although virtual welding objects 900 (e.g., virtual weld object 904) can be used by themselves, the logic processor-based subsystem 410 can also generate one or more visual cues 700 and audio cues, as discussed above, to aid the user in perform the weld. For example, visual cues related to the weld tool position, orientation and motion, e.g., CTWD, travel speed, work angle, travel angle and aim, can be, e.g., displayed on a fixed location on the display 441 and/or “attached” to the welding tool 480 to help the user in navigating the weld path); Caputo discloses creating virtual scenarios (See page 203, last paragraph) and having SOP (Standard operating procedures) that are simulated (See page 204, last paragraph; Table 5; Fig. 10). Meess and Leu disclose: wherein the display is configured to receive an input from a user to select a specific work portion of a plurality of work portions and to, based upon the input (Meess – see par 71 - embodiments of the invention can be used in actual working environments and the visual and audio cues, which are discussed further below, aid the welder in performing the weld. For example, a beginner can use the visual and audio cues to make sure the welding gun is oriented properly and the travel speed is correct. see par 74 - the face-mounted display 440A can display menu items for configuration and operation of system 400. 
Preferably, selection of the menu items can occur based on controls mounted on the welding tool 460 (e.g., buttons, switches, knobs, etc.). Preferably, the selection of the menu items can be done by tracking the eyes of the user. As the user's eyes focus on a menu item, the menu item is highlighted; see FIG. 20, par 98 - For example, as seen in FIG. 20, on screen 810, the user can select a welding procedure to perform during a welding training session. Preferably, the welding procedure determines the type of welding mode (e.g., GMAW, FCAW, SMAW, GTAW), the type of welding coupon (e.g., pipe, plate, etc.), the orientation of the weld joint (vertical, horizontal), etc. Preferably, based on the selection, default settings for the appropriate welding equipment such as welding power supply 450, wire feeder, hot-wire power supply, etc. are determined and automatically populated and presented to the user for review on screen 800 (see FIG. 19)).

Caputo discloses having Standard Operating Procedures with step-by-step instructions, depending on the component to be assembled, for assisting the workers in carrying out the working activity properly (See page 201, last paragraph – page 202, 1st paragraph). Meess discloses selection being used to present information to a user, or for viewing a selected welding procedure (See par 98).

Leu discloses: switch to display a simulation state showing at least the position of the specific work portion and work portion information relating to the specific work portion ([0100] as published states "By selecting one of the work portions P4 and P5 in the work portion list display DS1 displayed on the display unit 14, the worker can switch to display the simulation state display DS2 and the work portion information display DS3 at the selected work portion. The worker can also select one of the work portions P4 and P5 displayed in the simulation state display DS2." Leu – page 812, Col. 2 - an AR-assisted assembly system that incorporates Virtual Interaction Panels (VirIPs) to directly acquire a relevant understanding of the surrounding assembly scene from the human assembler's perspective. Their approach uses a visual assembly tree structure (VATS) to manage the assembly information and retrieve the relevant instructions for the assembly operators in the AR environment. VATS is a hierarchical tree structure that can be maintained easily via a visual interface. It can be integrated directly into an AR system or reside on a remote computer as a central control station to control the assembly information data flow during the entire assembly process. Based on the operator's experience, different assembly information can be retrieved to guide the assembly operation. Image-based instructions indicating the assembly operations can be stored in the assembly instruction database. At the same time, other means, such as video clips and graphical primitives in the form of short labels, text, and arrows that help the operator understand how to execute the assembly operations, also can be stored).
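The hierarchical instruction structure Leu describes (a tree that stores per-operation instructions and retrieves different detail levels based on operator experience) can be pictured with a minimal sketch. This is an illustration only; the class and method names below are assumptions, not identifiers from Leu or any other cited reference, and the sample step labels are borrowed from Caputo's Table 3 sub-tasks:

```python
# Minimal sketch of a hierarchical assembly-instruction tree in the spirit of
# Leu's visual assembly tree structure (VATS). All names are illustrative
# assumptions, not identifiers from any cited reference.

class AssemblyNode:
    def __init__(self, name, instruction=None):
        self.name = name                # sub-assembly or operation label
        self.instruction = instruction  # e.g. image path, video clip, or text label
        self.children = []              # ordered sub-operations

    def add(self, child):
        self.children.append(child)
        return child

    def instructions_for(self, experience_level):
        """Collect the instructions an operator should see: novices get every
        nested step; experienced operators get only the top-level step."""
        steps = []
        if self.instruction:
            steps.append(self.instruction)
        if experience_level == "novice":
            for child in self.children:
                steps.extend(child.instructions_for(experience_level))
        return steps

# Sample steps taken from Caputo's Table 3 sub-task descriptions.
root = AssemblyNode("audio unit install", "Place audio unit on the car floor")
root.add(AssemblyNode("fasten", "Insert screws using manual screwdriver"))

print(root.instructions_for("novice"))
```

Such a tree can live in the AR client or on a remote station, as the quoted passage notes; the retrieval policy (here a simple experience switch) is where per-operator tailoring would plug in.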
Caputo may not disclose the input to switch to different states, but Caputo, Meess, and Leu disclose the new limitations: wherein the display is configured to: display, for a movable work portion of the work target object, a movement locus of the movable work portion in the simulation state (Caputo – see page 206, Table 3 - sub-task 6 "place panels on the car floor"; sub-task 7 "pick screwdriver #1 and connect rear window ground"; sub-task 10 "place audio unit on the car floor"; sub-task 11 "insert screws using manual screwdriver"; sub-task 14 "place audio-unit and perform two screwings"; sub-tasks 17-18 – pick and place audio absorbing panels; see page 208, simulation frames 10-11 "insert screws", a14 and other corresponding frames), and receive an input from a user to trace the movement locus on the display of the simulation state ([0101] as published – "while the work portions P4 and P5 are fixed portions in the simulation state display DS2 illustrated in FIG. 6, the work portions P4 and P5 may be movable portions. That is, when the work portions have movement loci, the worker may provide a work instruction by tracing, in the simulation state display DS2, a state in which a virtual character U3 executes work while moving." [0102] as published – "In this manner, a simulation result for a selected work portion can be displayed by selecting one of the work portions P4 and P5 on the display unit 14. Since the worker can grasp the coordinates of the work portion and the approaching direction visually, rather than through numerical values, it is easier to understand the work place and the standing position for the work portion.
In addition, it is possible to centrally manage information by concentrating information that is necessary for work portions at the display unit 14, which improves the efficiency in check work and change work." Caputo – see page 203, 4th paragraph - At least one of the four risk indexes is in 'High-Risk Area': it is necessary to modify or provide a new WP design, based on the ergonomic feedback done by the previous iteration; page 203, Section 2.4, last paragraph – page 204, 1st paragraph – Process Simulate is a PLM software that allows to create a virtual scenario in which one or more workstations can be set; module "Human" allows to create a DHM (digital human model), named Jack, whose range of motions are natural; see page 211, Section 3.2, 4th paragraph - The evaluation shall be performed for the counter-reaction forces of tightening end that the worker needs to exert in a certain direction that depend on the geometry of the resource, the tightening torque and the assumed posture; see page 212, FIG. 12 – the user can select and change the location/position of forces and objects as well as load amounts in the user interface; see page 219, Conclusions, 1st paragraph - preventive performance evaluation of workplaces design is a formidable task to test the product feasibility since the design phase of a new product, giving the opportunity to reduce time and costs and the possibility to change design parameters without risks.), and wherein, based upon the traced movement locus input, the determination unit is configured to determine the approaching direction and the posture suitable for the simulation work to be executed on the movable work portion by the virtual character, and the output unit is configured to output the determined posture (Caputo – see page 201, FIG.
2 – WP (workplace) preliminary design, SOPs (standard operating procedures), then virtual scenario setting, then task simulation by DHMs (digital human models), then numerical analysis (including postures, forces, MMH (manual material handling), and repetitive actions); then, if the values indicate medium/high risk, the process goes back to additional design, simulation, and numerical analysis until "low risk" is reached and the WP design is validated; see page 202, 3rd paragraph - Once the WP is set, it is possible to use the DHM, customised according to the desired anthropometric measures, to simulate the operating task; see page 202, 4th paragraph - The 'Force Solver' tool, integrated in Tecnomatix Process Simulate software, enables to analyse the maximum force that a human model can exert in a posture. It allows specifying the posture and all input parameters. The analysis provides the maximum allowable force along a specified direction; see page 211, Section 3.2, 1st paragraph (repeating the same 'Force Solver' disclosure). Leu – see page 800, col. 1, 4th paragraph - With motion capture and 3D visualization capabilities, interactions among products, processes and human operators can be analyzed and evaluated to identify potential problems during assembly, such as awkward postures, poor workcell layout, insufficient tools and fixtures and inability to access parts; see page 807, Section 4 – in virtual assembly, generating accurate positions and orientations to update virtual parts is important for generating realistic assembly simulation; see page 808, Section 5.1 – CAD systems are used to model physical objects, while VR systems use scene graphs to animate the motion and interaction of CAD models, such as in virtual assembly; see FIG.
26 – CAD model-based assembly tool and fixture design, including process plan, simulations, instruction sheet, and error prevention; see page 812, Col. 2 - Image-based instructions indicating the assembly operations can be stored in the assembly instruction database. At the same time, other means, such as video clips and graphical primitives in the form of short labels, text, and arrows that help the operator understand how to execute the assembly operations).

Caputo, Meess, and Leu are analogous art as they are directed to having virtual objects in an environment, e.g. industrial/assembly/welding (See Caputo Abstract; Meess Abstract, par 103, FIG. 22; Leu Abstract).

1) Caputo discloses having a "digital" and virtual model, and using simulation software (See pages 197, 211). Caputo discloses creating virtual scenarios (See page 203, last paragraph) and having SOPs (standard operating procedures) that are simulated (See page 204, last paragraph; Table 5; FIG. 10). Meess improves upon Caputo by disclosing having a processor execute instructions and using a menu for selecting a welding procedure to perform (See par 71, 74, 98). One of ordinary skill in the art would be motivated to further include a processor and a menu with selections for a procedure to perform, to efficiently improve upon the demonstration of a proposed, simulated procedure in Caputo (See page 204, 2nd paragraph).

2) Caputo discloses having Standard Operating Procedures with step-by-step instructions, depending on the component to be assembled, for assisting the workers in carrying out the working activity properly (See page 201, last paragraph – page 202, 1st paragraph). Meess discloses having text/instructional information for welding and welding procedures (See par 65, 98), having multiple weld objects 904 to perform in a displayed view (See FIG. 22, par 106), displaying a variety of work information relative to portions of the object (See par 86, FIG. 18 – visual cues for specific welds – e.g.
voltage, current, wire feed speed; FIG. 22 – more specific information for each weld), and selecting a welding procedure to perform and presenting information to a user based on a selection (See par 98, FIG. 20). Leu improves upon Caputo and Meess by disclosing switching to a simulated state showing a position of work to be performed along with information in graphical primitives, labels, text, and arrows (See page 812). One of ordinary skill in the art would be motivated to further provide orientation, work angle, and travel angle for how to perform the tasks, as well as instructional information relative to specific welds and the display of multiple weld objects and information for the work portions as in Meess, and to retrieve assembly information and image-based instructions, along with graphical primitives, labels, text, and arrows that help operators understand how to execute assembly operations as in Leu, to efficiently improve upon the simulations and standard operating procedures that "assist" workers in carrying out activities properly in Caputo. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the use of simulation of activities for workers to reduce risk in virtual scenarios in Caputo (See pages 201-202, FIGS.
2-3; page 208, Table 5 showing simulation frames), to further include considering orientation, work angle, and travel angle for a welding task/operation where multiple weld locations on a work object are displayed as disclosed in Meess, and to further include switching to a simulated state showing a position of work to be performed along with information in graphical primitives, labels, text, and arrows as disclosed in Leu, since the claimed invention is merely a combination of old elements, in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success.

Concerning independent claim 7, Caputo, Meess, and Leu disclose: A work simulation method of outputting a state in which a virtual character in a virtual space executes predetermined work on a work target object (Caputo [same as claim 1] – pages 202-203; page 206, Section 3.1), the work simulation method causing a computer to execute processes comprising (Caputo [same as claim 1] – See page 196, last paragraph – page 197, 1st paragraph; see page 211, Section 3.2; see FIG. 12; Meess par 64, FIG. 13), the method comprising: The remaining limitations are similar to claim 1 above. Claim 7 is rejected for the same reasons. It would be obvious to combine Caputo, Meess, and Leu for the same reasons as claim 1.

Concerning independent claim 13, Caputo, Meess, and Leu disclose: A work simulation method of outputting a state in which a virtual character in a virtual space executes predetermined work on a work target object (Caputo [same as claim 1] – pages 202-203; page 206, Section 3.1), the work simulation method causing a computer to execute processes (Caputo [same as claim 1] – See page 196, last paragraph – page 197, 1st paragraph; see page 211, Section 3.2; see FIG. 12; Meess par 64, FIG.
13) comprising: The remaining limitations are similar to claim 1 above. Claim 13 is rejected for the same reasons. It would be obvious to combine Caputo, Meess, and Leu for the same reasons as claim 1.

Concerning claims 2, 8, and 14, Caputo, Meess, and Leu disclose: The work simulation system according to claim 1, wherein the approaching direction is determined based on a content of the work (Caputo – see page 207 - The OWAS evaluation tool from Tecnomatix Process Simulate software enables to analyse operations according to predefined joint values. Running the simulation, it is easy to evaluate the different postures according to the OWAS method; see page 210, FIG. 10 – showing an approaching direction for a posture of the virtual character evaluated by OWAS; see page 212, FIG. 12 – showing an approach direction for the work of task number 7; see page 213, FIG. 14 – showing an approach direction and force for the content of task number 14; see also Meess – see par 107 - visual cues related to the weld tool position, orientation and motion, e.g., CTWD, travel speed, work angle, travel angle and aim, can be, e.g., displayed on a fixed location on the display 441 and/or "attached" to the welding tool 480, as discussed above, to help the user in navigating the obstruction). It would be obvious to combine Caputo, Meess, and Leu for the same reasons as claim 1.
Concerning claims 3, 9, and 15, Caputo, Meess, and Leu disclose: The work simulation system according to claim 2, wherein the approaching direction is determined further based on information on an obstacle that hinders the work (Meess – see par 105 - while virtual objects 900 can perform a similar function as visual cues 700, the virtual objects are virtual representations of objects, e.g., objects that can be found in a typical real-world weld environment (welds, walls, pipes, overhanging obstructions) or any other object that can be virtually generated to aid the user in weld training; see par 107 - For example, as seen in FIG. 22, virtual object 908 represents an obstruction that a welder may encounter in a real-world welding scenario; In some embodiments, visual and audio cues can be used to aid the user in navigating the obstruction. …; The simulated obstructions can simulate walls, ceilings, pipes, fixtures, and/or any other type of obstructions. … the computer includes a database of predefined obstruction categories such as, e.g., an overhanging obstruction over the weld joint, an obstruction between the user and the weld joint, an obstruction that is very close to the weld joint, and/or another type of obstruction, which can be uploaded, by e.g., the user and/or the instructor, into welding training sessions; see also Leu – see page 805, Section 3.2.1 – Collision detection – A basis for planning and executing a virtual assembly operation is collision detection. The input is a set of objects (i.e., all objects in the scene graph) represented by CAD models, while the output is a set of intersecting or overlapping polygons. The ability of a VR system to simulate realistic object behavior at interactive frame rates is very important; thus, a collision detection algorithm should be able to compute the time and position of collision quickly, given the positions of moving objects as functions of time).
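Leu's collision-detection passage (a set of CAD-modeled scene objects in, the intersecting geometry out) can be pictured with a minimal broad-phase sketch. This is an illustration under stated assumptions, not code from Leu or any other cited reference: real VR/CAD systems refine the bounding-box test below to exact polygon intersection, and the object names are hypothetical:

```python
# Illustrative broad-phase collision check of the kind Leu's passage describes:
# given axis-aligned bounding boxes (AABBs) for scene objects, report which
# pairs overlap. Names and scene contents are illustrative assumptions.

from itertools import combinations

def aabb_overlap(a, b):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)). True if the boxes intersect."""
    (amin, amax), (bmin, bmax) = a, b
    # Boxes overlap iff their intervals overlap on every axis.
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def colliding_pairs(objects):
    """objects: dict of name -> AABB. Returns the overlapping name pairs."""
    return [(n1, n2) for (n1, b1), (n2, b2) in combinations(objects.items(), 2)
            if aabb_overlap(b1, b2)]

scene = {
    "welding_tool": ((0, 0, 0), (1, 1, 1)),
    "obstruction":  ((0.5, 0.5, 0.5), (2, 2, 2)),
    "wall":         ((5, 0, 0), (6, 3, 3)),
}
print(colliding_pairs(scene))  # the tool intersects the obstruction; the wall is clear
```

A broad phase like this only prunes candidate pairs; the "time and position of collision" Leu mentions would come from a narrow-phase test on the surviving pairs at each simulation step.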
It would be obvious to combine Caputo, Meess, and Leu for the same reasons as claim 1. Caputo discloses a variety of different objects in the simulation frames (See page 208, Table 5). Meess and Leu improve upon Caputo by disclosing different scenarios and collision handling, improving upon the simulation in Caputo.

Concerning claims 4, 10, and 16, Caputo, Meess, and Leu disclose: The work simulation system according to claim 1, wherein when a work position at which the virtual character works on the work portion is defined as a start point and a position of the work portion is defined as an end point (Caputo – page 213, FIG. 14; page 213, 2nd paragraph - Geometrical (distances) and physical (durations) data can be extracted from the simulation and, knowing the physical properties of the floor, it is possible to evaluate the initial and the sustained forces (in N) exerted by the worker by Equations 5 and 6), the approaching direction is a direction from the start point to the end point (Meess – discloses the entire claim as well as the last limitation – see par 86, FIG. 18 - calculate the X and Y coordinates on the video image or display 441 where an object such as, e.g., welding tool 460, will be located, and the subsystem 410 can then map the visual cues 700 to appropriate X and Y coordinates on the video image or display 441 corresponding to the object; par 89, FIG. 18 - visual cues 700 can aid in identifying to the user the weld start position, the weld stop position and the total length of the weld.
For example, in cases where the entire joint between two workpieces is not welded but only certain portions, the visual cues 700 can aid the user in identifying the start position by having a marker such as, e.g., a green spot at the start position and a second marker such as, e.g., a red spot at the stop position; visual cues 700 can alert the user, e.g., by displaying a message in text, e.g., in a corner of the display 441, or by using visual cues 700 to identify the start position, stop position and weld length; par 106, FIG. 22 - For example, the logic processor-based subsystem 410 can generate virtual weld objects 904 that visually show where the user should place the welds on a coupon/workpiece 480 A, B. Similar to the weld start/stop visual aids discussed above, the weld objects 904 show where the user should place the weld and how long the weld should be). It would be obvious to combine Caputo, Meess, and Leu for the same reasons as claim 1 and claim 3.

Concerning claims 6, 12, and 18, Caputo, Meess, and Leu disclose: The work simulation system according to claim 1, wherein the display unit is configured to display the posture for a work portion selected from the work portion list on a second display as a first display, and to display the work information for the selected work portion as a third display (Caputo – see page 213, FIG. 14 – showing the hand and support in the "Force Solver"; see also Leu – see page 808, Section 5 – the process of virtual assembly (VA) planning should take various factors into consideration, including assembly sequence, tooling and fixture requirements, and ergonomics; see page 811, Section 6.2 – Workplace Design and Planning (WDP) aims to simplify the assembly process through Design for Assembly (DFA) techniques; WDP involves workplace design, postural concerns, and workplace layout analysis; page 812, Col. 2 - an AR-assisted assembly system that incorporates Virtual Interaction Panels (VirIPs)…Their approach uses a visual assembly tree structure (VATS) to manage the assembly information and retrieve the relevant instructions for the assembly operators in the AR environment; At the same time, other means, such as video clips and graphical primitives in the form of short labels, text, and arrows that help the operator understand how to execute the assembly operations; see page 819, Section 8.2.3 - the fastening operation predominantly involves the upper body of the operator, so RULA is a useful tool for ergonomic analysis. RULA has been developed for use in ergonomic investigation of workplaces. The RULA analysis can be used to determine the risk levels associated with particular postures and to suggest actions needed in order to reduce the risk of long-term ergonomic injuries and to design safer workplaces). It would be obvious to combine Caputo, Meess, and Leu for the same reasons as claim 1.

Concerning claim 20, Caputo, Meess, and Leu disclose: The work simulation system according to claim 1, wherein the approaching direction is determined based on a scoring process that evaluates candidate positions that are scored (Caputo – see page 201, FIG.
2 – WP (workplace) preliminary design, SOPs (standard operating procedures), then virtual scenario setting, then task simulation by DHMs (digital human models), then numerical analysis (including postures, forces, MMH (manual material handling), and repetitive actions); then, if the values indicate medium/high risk, the process goes back to additional design, simulation, and numerical analysis until "low risk" is reached and the WP design is validated) taking into consideration whether obstacles that may hinder the work are present (Meess – see par 105 - while virtual objects 900 can perform a similar function as visual cues 700, the virtual objects are virtual representations of objects, e.g., objects that can be found in a typical real-world weld environment (welds, walls, pipes, overhanging obstructions) or any other object that can be virtually generated to aid the user in weld training; see par 107 - For example, as seen in FIG. 22, virtual object 908 represents an obstruction that a welder may encounter in a real-world welding scenario; For example, the virtual object 908 can change colors whenever it is "hit." Preferably, the virtual obstruction object 908 is opaque. In some embodiments, visual and audio cues can be used to aid the user in navigating the obstruction. For example, visual cues related to the weld tool position, orientation and motion, e.g., CTWD, travel speed, work angle, travel angle and aim, can be, e.g., displayed on a fixed location on the display 441 and/or "attached" to the welding tool 480, as discussed above, to help the user in navigating the obstruction; The simulated obstructions can simulate walls, ceilings, pipes, fixtures, and/or any other type of obstructions.
Preferably, the logic processor-based subsystem 410 and/or another computer includes a database of predefined obstruction categories such as, e.g., an overhanging obstruction over the weld joint, an obstruction between the user and the weld joint, an obstruction that is very close to the weld joint, and/or another type of obstruction, which can be uploaded, by e.g., the user and/or the instructor, into welding training sessions. For the "scoring process," see also Leu – page 809, col. 2, last paragraph – a computer-aided assembly process planning (CAPP) system creates assembly plans using component global freedom, which checks every component and in turn determines whether any blocking components lie in the possible assembly path). It would be obvious to combine Caputo, Meess, and Leu for the same reasons as claims 1-3 above. In addition, Leu improves upon the analysis for postures and forces in Caputo, considering optimization and cost functions, and upon the navigation of obstructions in Meess, by disclosing checking whether a component blocks a possible assembly path.

Response to Arguments

Applicant's arguments filed 11/21/25 have been fully considered but they are not persuasive and/or are moot in view of the new rejections. Applicant's arguments are moot in light of the new rejections necessitated by the amendments.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN R GOLDBERG whose telephone number is (571)270-7949. The examiner can normally be reached 8:30 AM - 4:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anita Coupe, can be reached at 571-270-3614.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /IVAN R GOLDBERG/Primary Examiner, Art Unit 3619

Prosecution Timeline

Dec 04, 2023
Application Filed
Jun 12, 2025
Non-Final Rejection — §103
Jul 23, 2025
Examiner Interview Summary
Jul 23, 2025
Applicant Interview (Telephonic)
Jul 30, 2025
Response Filed
Sep 17, 2025
Final Rejection — §103
Nov 21, 2025
Request for Continued Examination
Dec 11, 2025
Response after Non-Final Action
Mar 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596970
SYSTEM AND METHOD FOR INTERMODAL FACILITY MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12591826
SYSTEM FOR CREATING AND MANAGING ENTERPRISE USER WORKFLOWS
2y 5m to grant Granted Mar 31, 2026
Patent 12586020
DETERMINING IMPACTS OF WORK ITEMS ON REPOSITORIES
2y 5m to grant Granted Mar 24, 2026
Patent 12579493
SYSTEMS AND METHODS FOR CLIENT INTAKE AND MANAGEMENT USING HIERARCHICAL CONFLICT ANALYSIS
2y 5m to grant Granted Mar 17, 2026
Patent 12555055
CENTRALIZED ORCHESTRATION OF WORKFLOW COMPONENT EXECUTIONS ACROSS SOFTWARE SERVICES
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
35%
Grant Probability
72%
With Interview (+36.9%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 365 resolved cases by this examiner. Grant probability derived from career allow rate.
