DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. It is responsive to the submission dated 05/24/2024. Claims 1-19 are presented for examination.
Information Disclosure Statement
2. The information disclosure statements (IDSs) submitted on 05/24/2024 are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
5. Claims 1-4, 10-13, and 15-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Borduas (US 11900537).
Considering claim 1, Borduas discloses an information processing apparatus (e.g., an automated system 100 for constraining shape deformation of 3D objects; see fig. 1 and col. 7 lines 44-45) comprising: one or more processors (item 120 or 124, fig. 1) configured to:
acquire geometric inclusion relationship of three-dimensional models that constitute a scene represented by computer graphics (e.g., part processing, a first step of the method is to provide an input consisting of the 3D object 210 designed beforehand with computer-aided design (CAD) software (or, alternatively, obtained through a 3D scan of the object). In one or more embodiments, this 3D object 210 may be topologically simplified by the algorithms (to create the topologically simplified 3D object) before further processing is applied. A simplification process is detailed herein. Definition of N parts: The method may be provided, by a user, an input consisting of the number of parts (N) that should define the 3D object 210. See col. 8 lines 55-67);
acquire operation information from a user (e.g., The method may be provided, by a user, an input consisting of the number of parts (N) that should define the 3D object 210. See col. 8 lines 65-68);
determine a three-dimensional model to be simplified among the three-dimensional models that constitute the scene, and simplification processing to be performed on the three-dimensional model to be simplified, based on the inclusion relationship and the operation information (e.g., If N differs from the number of parts automatically detected and forming the 3D object 210, the parts are topologically simplified by a topological simplification algorithm described in detail in the following steps. Referring to FIG. 3, the 3D object 310 is divided into parts 320 and 330 when using N=2. For each of the N parts of the 3D object, the simplification process may be performed based on a number of topological branches defined by the user. The selection of the number of branches may follow a number of branches which can be independently manipulated. This topological simplification process may be executed to automate the selection of the different constraint zones, allowing for an easy and automated integration of a variety of 3D objects by providing a correspondence between similar 3D objects (or products). This correspondence may be used such that once a first 3D object has been cut (using cuts) into different constraint zones (becoming the 3D model), a second 3D object may inherit its cuts (if topologically similar), and thus an equivalent positioning of the constraint zones. Additionally, two 3D objects (a first 3D object and a second 3D object), defined by a corresponding number of parts and number of branches may have interchangeable parametrization allowing for the mapping of delimited areas and scalar field between the two 3D objects. See col. 9 lines 1-30); and
perform the determined simplification processing on the three-dimensional model to be simplified (e.g., The topological simplification algorithm computes a dilated surface of a 3D object, generally preserving the initial geometry while having a simplified topology. The algorithm performs a series of steps to achieve topological simplification. See col. 9 lines 31-35).
Claim 2 is a method claim reciting the functions performed by the apparatus of claim 1. It is therefore rejected under the same rationale as claim 1.
As per claim 3, Borduas discloses the simplification processing is one or more of deletion processing for deleting or hiding a three-dimensional model, combination processing for combining three-dimensional models, and simplification processing for simplifying a three-dimensional shape. See col. 9 lines 15-30 and col. 11 line 35 to col. 12 line 20.
As per claim 4, Borduas discloses in the acquiring the geometric inclusion relationship, a tree structure including the three-dimensional models as nodes (e.g., branches) is acquired based on the geometric inclusion relationship of the three-dimensional models that constitute the scene (e.g., the simplification process may be performed based on a number of topological branches defined by the user. The selection of the number of branches may follow a number of branches which can be independently manipulated. This topological simplification process may be executed to automate the selection of the different constraint zones, allowing for an easy and automated integration of a variety of 3D objects by providing a correspondence between similar 3D objects (or products). This correspondence may be used such that once a first 3D object has been cut (using cuts) into different constraint zones (becoming the 3D model), a second 3D object may inherit its cuts (if topologically similar), and thus an equivalent positioning of the constraint zones. Additionally, two 3D objects (a first 3D object and a second 3D object), defined by a corresponding number of parts and number of branches may have interchangeable parametrization allowing for the mapping of delimited areas and scalar field between the two 3D objects. See col. 9 lines 1-30 and col. 11 line 36 to col. 12 line 20).
As per claim 10, Borduas discloses in the acquiring the geometric inclusion relationship, the tree structure is acquired for each of sub-scenes included in the scene (e.g., For each of the N parts of the 3D object, the simplification process may be performed based on a number of topological branches defined by the user. The selection of the number of branches may follow a number of branches which can be independently manipulated. This topological simplification process may be executed to automate the selection of the different constraint zones, allowing for an easy and automated integration of a variety of 3D objects by providing a correspondence between similar 3D objects (or products). See col. 9 lines 10-25).
As per claim 11, Borduas discloses each of the sub-scenes (e.g., zones or volumetric elements) where the tree structure (e.g., connect parts or elements of the simplified 3D object) is acquired is a scene of each coordinate system that is movable with movement of the three-dimensional model in the scene (e.g., Borduas discloses the volume of the vacuum wrap surface is meshed as soon as the vacuum wrap has been created. …. The result is the vacuum wrap volumetric mesh. The vacuum wrap volumetric mesh can be used to compute another set of poly-harmonic weights (e.g., generalized barycentric coordinates weight, local barycentric coordinates weight) that can be identified as solid poly-harmonic weights. These weights link the topologically simplified 3D object with the 3D object so that when the optimal transformation of the topologically simplified 3D object (to respect all criteria) is identified, the transformation can be applied back to the 3D object. This may be done just once after the topologically simplified 3D object is created and has been meshed volumetrically. One of two methods consisting of a poly-harmonic surface transform and a deformation using the spatial poly-harmonic weights may be used to create a continuous surface over the output surface of the vacuum wrap algorithm, providing a surface respecting the imposed delimitation of the zones of the topological rig algorithm (and zone selection) while having a continuous surface. See col. 17 line 2 to col. 18 line 53).
As per claim 12, Borduas discloses in the acquiring the geometric inclusion relationship, the tree structure is acquired based on change of a moving position of the three-dimensional model moving in the scene (e.g., The intrinsic non-rigid zone creates a smooth transition between the extrinsic rigid and non-rigid zones while trying to keep the engineering constraints, guarantying the continuity of surface and forbidding any self-interference. The method involves a weight calculation of the intrinsic non-rigid zones as described herein: Using all controls points from the XR and XNR zones, the spatial poly-harmonic weights of each point of the volume are calculated, allowing a deformation that is by nature without any self-interference and that respects the continuity of surfaces. See col. 17 lines 2-13).
As per claim 13, Borduas discloses in the acquiring the geometric inclusion relationship, the tree structure is acquired based on a number of overlaps of voxels of the three-dimensional models in a case where the three-dimensional models that constitute the scene are voxelized and the voxels are arranged in a three-dimensional space (for example, Borduas discloses: The volume of the topologically simplified 3D object may be meshed with volumetric elements, such as tetrahedral elements, hexahedral elements, and voxels, in an acceptable initial manner such that information of the in-thickness field is added to a volumetric mesh of the vacuum wrap. More generally, the 3D object may be used directly—in general, the vacuum wrap is an optimization over using the 3D object directly. See col. 17 lines 51-65).
As per claim 15, Borduas discloses in the acquiring the operation information, the operation information is acquired using an interactive user interface. See col. 7 lines 24-67.
As per claim 16, Borduas discloses in the acquiring the operation information, at least one of information designating a position of a viewpoint for observing the scene and information designating a three-dimensional model that is possibly changed in at least any of a position, a posture, and a size among the three-dimensional models that constitute the scene is acquired as the operation information (for example, Borduas discloses: The extrinsic rigid zones may be positioned in space according to a rig file of the 3D target object (or target 3D scan), where the rig file is a markup language-defined file forming a coordinate system in the form of joints and elements. Alternatively, the extrinsic rigid zones may be positioned using a manual transformation (e.g., by a user). The rig file may also be constructed with parametrized joints and parent-child dependencies between the joints and the elements, allowing for parameter-controlled positioning of the extrinsic rigid zones. The rig file can also be used for controlling the location of constraints of the extrinsic non-rigid zones. (88) The rig file may undergo a step called “scaling” to better fit the geometry of the target object. The scaling is constrained by the presence of points, referred to as markers (or landmarks). The location of the markers can be guided by a user, the topological rig, post-processing on the topological rig, an AI, or a geometrical analysis of the 3D object and its centerline. For example, the rig for the knee of a child can be scaled to be anatomically suited to the knee of an adult. The scaling may also do the translation and finding of the parameters of the rig to fit the target object. The scaling may be implemented, for example, using the Scale tool provided in the OpenSIM library. See col. 13 line 47 to col. 14 line 10).
As per claim 17, Borduas discloses in the acquiring the operation information, at least any of a trajectory of a position of a viewpoint by a user having previously experienced the scene and an operation history of the user to the three-dimensional models is acquired as the operation information. See col. 12 lines 35-68 and col. 14 line 38 to col. 15 line 5.
As per claim 18, Borduas discloses acquiring a three-dimensional model of a real object in a real space (e.g., generating a topologically simplified 3D object composed of connected structure in 3D space), wherein, in the acquiring the geometric inclusion relationship, geometric inclusion relationship between the three-dimensional models of the computer graphics and the three-dimensional model of the real object is also acquired (e.g., by positioning zones of identified structures or elements of the 3D target object to fit onto the 3D model), and wherein the simplification processing determined in the determining includes processing for deleting or hiding any of the three-dimensional models of the computer graphics based on the three-dimensional model of the real object. See col. 8 line 57 to col. 9 line 30.
Claim 19 is rejected under the same rationale as claim 1. Additionally, Borduas discloses a non-transitory computer-readable storage medium storing a program that causes a computer to execute an information processing method. See col. 7 line 21 to col. 8 line 38.
Allowable Subject Matter
5. Claims 5-9 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the prior art of record fails to teach the information processing method according to claim 4, wherein the operation information includes information designating a position of a viewpoint for observing the scene, and wherein, in the determining, a node of the viewpoint is added to the tree structure based on the position of the viewpoint, and among the three-dimensional models that constitute the scene, a three-dimensional model possibly coming into a visual field of the viewpoint is specified based on relationship between the nodes of the three-dimensional models and the node of the viewpoint in the tree structure (as recited in claim 5); and the information processing method according to claim 13, wherein, in the acquiring the geometric inclusion relationship, the tree structure is acquired by directly connecting a three-dimensional model having a voxel in which the number of overlaps is one, to a root node of the tree structure, and among three-dimensional models sharing a voxel in which the number of overlaps is two or more, determining a three-dimensional model having a voxel in which the number of overlaps is smaller by one, as a parent node, and determining a three-dimensional model not determined as the parent node, as a child node (as recited in claim 14).
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sakairi et al. (US 20140058708) discloses a computer-implemented method of simplifying a complex part in a geometric model by approximating its shape, including creating a two-dimensional cross-sectional plane through the complex part in at least two of three mutually orthogonal axes of the part, to give two or three planes, each reproducing the shape of the complex part at the cross section; and performing a combination operation on the planes to form a new body from the planes.
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESNER SAJOUS whose telephone number is (571) 272-7791. The examiner can normally be reached Monday through Friday, 10:00 AM to 7:30 PM (ET).
Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice or email the Examiner directly at wesner.sajous@uspto.gov.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome, can be reached on 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESNER SAJOUS/Primary Examiner, Art Unit 2612
WS
02/07/2026