DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Claims 1-20 have been interpreted and are determined to not invoke 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) claim interpretation.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by
Kang et al., US Patent Application Publication No. 2022/0055213 A1, hereinafter Kang.
Kang describes mapping a real-world environment into a three-dimensional (3D) virtual simulation of that environment, with colliders having a level of detail (LOD) based on the position of, or distance from, a robot element relative to an interactive element. Refer to the Abstract, paragraphs 16, 17, 61-63, 65-70, 97-100, and 106, claims 1, 2, and 8, and FIGs. 1-7 and 10. A detailed analysis of the claims and Kang follows.
Claim 1:
1. A method comprising:
rendering a three-dimensional (3D) environment with an interactive feature that is defined from a plurality of points (Kang: “mapped from a working environment in a reality” in the abstract and paragraphs [0016] and [0017]; “Therefore, the first phase of the method is to construct a three-dimensional digital virtual model for the robot arm, the factory working environment, and the materials in the reality world.” in paragraph [0062]; “The first phase is to set up a corresponding digital virtual model in the virtual working environment for the complete robotic production line in the reality working environment.” and “Therefore, when building the digital virtual model, all objects in the production line, such as, a material cutting machine, e.g., a CNC machine, a material transportation machine, e.g., a conveyor, a robot workstation, and even the ground surface of the plant in the reality working environment are correspondingly mapped and modelled in the format of 3D models to build the virtual working environment.” both in paragraph [0065]; and “and mapped from a working environment in a reality, in a robot simulator (step 501)” in FIG. 10 step 501 and paragraph [0097]; and “the end effector and a target object consisting of a plurality of basic members and mapped from a working environment in a reality, in a robot simulator” in paragraphs [0099] and [0106] and claims 1 and 8.);
determining a position of the interactive feature in the 3D environment (Kang: a collision or no collision of, for example, the robot and the rack (FIG. 5(b)) or a portion of the rack (FIG. 5(c) and FIG. 5(d)), as discussed in paragraph [0070], determines a position of the interactive feature in the 3D environment; see also FIG. 10 step 505.);
generating a first collider with which to detect collisions with the interactive feature based on a first shape formed by a first set of points retained from decimating the plurality of points by a first amount in response to the interactive feature being at a first position in the 3D environment (Kang: the robot colliding with, for example, the cantilever rack discussed in paragraphs [0067]-[0070], and the rack collider having the Low Level Of Detail in FIG. 5(b).);
generating a second collider with which to detect collisions with the interactive feature based on a second shape formed by a second set of points retained from decimating the plurality of points by a second amount in response to the interactive feature being at a second position in the 3D environment (Kang: the robot colliding with, for example, the cantilever rack discussed in paragraphs [0067]-[0070], and the rack collider having the Medium or High Level Of Detail in FIG. 5(c) and FIG. 5(d).); and
detecting collisions with the interactive feature using the first collider when the interactive feature is at the first position in the 3D environment and using the second collider when the interactive feature is at the second position in the 3D environment (Kang: based on the distance between the initial position and the target position of the target object, e.g., no collision or collision as discussed in paragraph [0070]; detection starts at the Low Level Of Detail based on distance and, if a collision occurs, proceeds to the Medium Level Of Detail and then to the High Level Of Detail.).
Claim 2:
2. The method of claim 1, wherein the first amount is greater than the second amount,
wherein the first set of points comprises fewer points than the second set of points (Kang: Low Level Of Detail FIG. 5(b) has fewer points.), and
wherein the first shape contains fewer edges and contours than the second shape (Kang: the Low Level Of Detail shape of FIG. 5(b) contains fewer edges and contours than the Medium or High Level Of Detail shapes of FIG. 5(c) and FIG. 5(d).).
Claim 3:
3. The method of claim 1 further comprising:
dynamically associating the first collider to the interactive feature in response to detecting the interactive feature at the first position (Kang: detection begins with the Low Level Of Detail of FIG. 5(b) when at a no-collision position.); and
dynamically associating the second collider to the interactive feature in response to detecting the interactive feature at the second position (Kang: detection proceeds to the Medium or High Level Of Detail of FIG. 5(c) and FIG. 5(d) when at a collision position.).
Claim 4:
4. The method of claim 3,
wherein the first position corresponds to a particular range of depths in the 3D environment (Kang: detection begins with the Low Level Of Detail of FIG. 5(b) when at a no-collision position; because there is a plethora of such no-collision positions, this covers the claimed "a particular range of depths in the 3D environment".)
or
the interactive feature being rendered at a particular range of sizes in the 3D environment (Kang: the perspective views illustrated in FIGs. 1-5(a) and 7-9(h) render the interactive feature at a particular range of sizes in the 3D environment based upon depth.); and
wherein dynamically associating the first collider comprises moving the second collider with the interactive feature in the 3D environment while the interactive feature remains at the particular range of depths or is rendered at the particular range of sizes (Kang: FIGs. 5(a)-5(d) illustrate cantilever rack with wheels discussed in paragraphs [0068] and [0069].).
Claim 5:
5. The method of claim 3, wherein dynamically associating the first collider comprises:
linking endpoints of the first collider to different points of the first set of points (Kang: the cantilever rack discussed in paragraphs [0067]-[0070], for example, and the rack collider having the Low Level Of Detail in FIG. 5(b) link endpoints of the cantilever rack.); and
moving the first collider with the interactive feature based on said linking (Kang: because the collider is linked to the cantilever rack, the collider follows any movement of the cantilever rack.).
Claim 6:
6. The method of claim 1, wherein generating the first collider comprises:
defining a single shape that approximates the first shape formed by the first set of points (Kang: the cantilever rack discussed in paragraphs [0067]-[0070], for example, and the rack collider having the Low Level Of Detail in FIG. 5(b) define a single shape that approximates the first shape formed by the first set of points.).
Claim 7:
7. The method of claim 1, wherein detecting the collisions comprises:
performing a first number of calculations to determine a collision between an object in the 3D environment and the first collider (Kang: paragraph [0068] describes that the Low Level Of Detail collider of FIG. 5(b) reduces computational cost, which computational cost corresponds to a number of calculations.); and
performing a second number of calculations to determine a collision between the object and the second collider (Kang: paragraph [0068] describes that the Medium or High Level Of Detail collider increases computational cost, which computational cost corresponds to a second number of calculations different than the first number of calculations.),
wherein the second number of calculations is greater than the first number of calculations based on the second collider having a more complex shape or different shapes than the first collider (Kang: paragraphs [0068]-[0070] describe that the Medium or High Level Of Detail collider increases computational cost, which computational cost corresponds to a second number of calculations greater than the first number of calculations.).
Claim 8:
8. The method of claim 1,
wherein generating the first collider comprises defining a single shape that is within a threshold distance of the first shape formed by the first set of points (Kang: FIG. 5(b).); and
wherein generating the second collider comprises defining a plurality of shapes that collectively are within the threshold distance of the second shape formed by the second set of points (Kang: FIG. 5(c) and FIG. 5(d).).
Claim 9:
9. The method of claim 1,
wherein generating the first collider comprises defining a single simple shape corresponding to a cube, sphere, cone, truncated cone, cylinder, torus, pyramid, or cuboid that matches the first shape formed by the first set of points by a threshold amount (Kang: FIG. 5(b) and paragraphs [0067]-[0070].); and
wherein generating the second collider comprises defining two or more of the single simple shape to match the second shape formed by the second set of points by the threshold amount (Kang: FIG. 5(c), FIG. 5(d), and paragraphs [0067]-[0070].).
Claim 10:
10. The method of claim 1, wherein detecting the collisions comprises:
detecting the collisions by calculating a position of a collision element relative to a position of the first collider rather than a position of each point of the first set of points when the interactive feature is at the first position (Kang: FIG. 5(b) and paragraphs [0067]-[0070].).
Claim 11:
11. The method of claim 1 further comprising:
determining the first position of the interactive feature based on
a depth of the interactive feature in the 3D environment (Kang: the 3D environment has depth; thus, the cantilever rack's position in FIG. 5(a) is based on depth in the 3D environment.)
or
an amount of the 3D environment that is occupied by the interactive feature (Kang: the 3D environment has depth; thus, the perspective view of the cantilever rack's position in FIG. 5(a) is based on depth in the 3D environment, which is illustrated by the size of the cantilever rack in the perspective view.).
Claim 12:
The actor(s) of the claimed method is/are not defined; thus, the claimed steps cover a machine, a human, or both a machine and a human as the actor(s). Refer to MPEP 2111-2111.05 Claim Interpretation; Broadest Reasonable Interpretation [R-10.2019].
12. The method of claim 1 further comprising:
determining an amount of resources that are available for generating the 3D environment (Kang: the designer, when deriving the algorithm manifested in paragraphs [0067]-[0070], takes into account resources such as the "computational cost of the collision check".); and
increasing the first amount and the second amount of decimation in response to the amount of resources being less than a threshold amount (Kang: the designer, when deriving the algorithm manifested in paragraphs [0067]-[0070], takes into account resources such as the "computational cost of the collision check" when determining the appropriate decimation.).
Claim 13:
13. The method of claim 1, wherein detecting the collisions comprises:
providing a lower level of collision detection accuracy when detecting the collisions with the first collider (Kang: FIG. 5(b) and paragraphs [0067]-[0070] describe that FIG. 5(b) has lower accuracy compared to FIGs. 5(c) and 5(d).)
and
a higher level of collision detection accuracy when detecting the collisions with the second collider (Kang: FIG. 5(c) and FIG. 5(d) and paragraphs [0067]-[0070] describe that FIG. 5(c) and FIG. 5(d) have higher accuracy compared to FIG. 5(b).).
Claim 14:
14. The method of claim 1 further comprising:
performing a collision action in response to detecting a collision with one of the first collider or the second collider (Kang: FIG. 5(b), FIG. 5(c), and FIG. 5(d) and paragraphs [0067]-[0070].).
Claims 15-19:
Claims 15-19 are system claim versions of method claims 1-5, and system claims 15-19 are rejected for the same reasons given above for method claims 1-5. Regarding the claimed "one or more hardware processors configured to:" in these system claims, refer to the discussion of the cloud-based computer-implemented system in paragraph [0041].
Claim 20:
Claim 20 is a non-transitory computer-readable medium claim version of method claim 1, and non-transitory computer-readable medium claim 20 is rejected for the same reasons given above for method claim 1. Regarding the claimed "non-transitory computer-readable medium storing instructions that, when executed by one or more processors of a three-dimensional (3D) interactivity system, cause the 3D interactivity system to perform operations comprising:" in this claim, refer to the discussion of the cloud-based computer-implemented system in paragraphs [0041]-[0045].
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Shchurko et al., US Patent No. 11,798,248, describes at least two colliders in column 3 line 34 to column 4 line 5:
The device 104 then uses the model 118 of eyewear and the model 120 of the user's 102 face to simulate a fit of the selected eyewear on the user's 102 face. In some embodiments, the device 104 attaches one or more colliders 122 to the model 118 of eyewear and colliders 124 to the model 120 of the user's face. The colliders 122 and 124 are three-dimensional, virtual, kinematic objects that are used to detect collisions between different virtual objects. For example, the colliders 122 and 124 may be kinematic cubes that attach to various surfaces in the model 118 of eyewear and the model 120 of the user's 102 face. The colliders 122 and 124 may be rigid or inelastic virtual objects that do not change shape when colliding with other objects in the virtual space. The colliders 122 and 124 may be any suitable size or shape that occupies volume in the virtual space. For example, the colliders 122 and 124 may be cubes, boxes, spheres, tubes, or cylinders. Additionally, the device 104 may add any suitable number of colliders 122 and 124 to the models 118 and 120.
Edelsbrunner et al., US Patent No. 8,004,517, describes in column 26 line 56 to column 27 line 3:
The memory requirements of polygonal models are much higher than those of point clouds with the same number of points. Very large point sets, often tens of millions of data points, are frequently necessary to perform accurate digital shape reconstruction. For computational and storage efficiency reasons, it is impractical to arrange all measured data points into a mesh format. However, when point clouds are filtered or meshes are decimated to reduce data structure size, important surface information may be lost and computational accuracy may be reduced. To address these limitations associated with using polygonal models, alternative point cloud-based embodiments of the invention will be described that limit loss of object surface information due to decimation, produce much "lighter" models and increase computational efficiency even for very large data sets.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEFFERY A BRIER whose telephone number is (571)272-7656. The examiner can normally be reached on Mon-Fri from 8:30am-3:00pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao M Wu, can be reached at telephone number 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/JEFFERY A BRIER/Primary Examiner, Art Unit 2613