DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. It is responsive to the submission dated 09/30/2025. Claims 1-16 and 18-21 are presented for examination, of which claims 1, 18, and 19 are independent.
Information Disclosure Statement
2. The information disclosure statements (IDSs) submitted on 08/23/2024 and 07/15/2025 are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner.
Claim Rejections - 35 USC § 101
3. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 19 recites a storage medium storing thereon computer-executable instructions to be executed by a computer to perform image data processing acts.
However, paragraph 148 of the original disclosure describes the "storage medium" as having an open-ended meaning that includes "any tangible medium that contains … program may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, the computer-readable signal medium may include a data signal that propagates in a baseband or as a part of a carrier and carries thereon a computer-readable program code. The data signal propagating in such a manner may take a plurality of forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination thereof. The computer-readable signal medium may also be any other computer-readable medium than the computer-readable storage medium. The computer-readable storage medium may send, propagate or transmit a program used by or in combination with an instruction execution system, apparatus or device. The program code included on the computer-readable medium may be transmitted by using any suitable medium, including but not limited to an electric wire, a fiber-optic cable, radio frequency (RF) and the like, or any appropriate combination thereof."
Thus, the features of claim 19 pertain to a signal per se. According to MPEP § 2111.01, the Examiner is obligated to give claim terms or phrases their broadest reasonable interpretation as understood by one of ordinary skill in the art, unless the applicant has provided some indication of the definition of the claimed terms or phrases. Therefore, it is not clear whether the claimed "storage medium" is non-transitory, and the Examiner interprets the storage medium to include any type of medium, including a carrier-wave medium such as a signal. Signals per se are non-statutory subject matter. Thus, claim 19 encompasses non-statutory subject matter.
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1-8, 12, 14-16 and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over Du et al. (CN 112348958 A) in view of Chen et al. (CN 109741404 B).
Considering claim 1, Du discloses a data processing method (for example, Du discloses a method, device, and system for collecting keyframe images, and a method for three-dimensional reconstruction from the keyframe images. See abstract), comprising:
starting an acquisition unit upon receiving an instruction for acquiring a three-dimensional model corresponding to a target object (for example, Du discloses: step 110, determining point cloud data of a target object; step 120, establishing a three-dimensional model enclosing the target object; and step 130, determining a keyframe image. In some embodiments, the object to be scanned, i.e., the target object, can be placed on a suitable plane (e.g., table top, ground, etc.), and a user holding a device with a depth camera, such as a cell phone or tablet, aims at the target object to capture corresponding RGB-D data. See the detailed description of fig. 1, step S110, paras. 1-4, under the "Specific implementation examples" section of Du);
generating[, in response to meeting a preset acquisition condition,] a three-dimensional grid to be processed enclosing the target object, and displaying the three-dimensional grid to be processed [on a display interface of a device to which the acquisition unit belongs] (e.g., step S120 of Du discloses: a three-dimensional model capable of enclosing the target object is established according to the position information of the point cloud data of the target object, the internal parameters of the image acquisition device, and the external parameters at the current photographing position. The surface of the three-dimensional model is divided into a plurality of meshes, each mesh corresponding to one shooting position. For example, upon obtaining the plane on which the target object is placed, the three-dimensional hemispherical model may be rendered to the corresponding location on that plane, such that the three-dimensional hemispherical model encloses the target object within it. See the detailed description of step S120, paras. 1-7, under the "Specific implementation examples" section of Du); and
adjusting target display information of a corresponding sub-grid (corresponding to a set of vertices or a mesh of a grid in the 3D model) in the three-dimensional grid to be processed based on a relative acquisition angle between the acquisition unit and the three-dimensional grid to be processed (for example, Du discloses: multiple sets of vertices on the surface of the three-dimensional model may be provided, and each set of vertices may determine a mesh. If the coordinates of a certain set of vertices are M, then gl_Position = P x V x M for rendering. Based on the corresponding gl_Position values for each set of vertices, a three-dimensional model with a surface divided into a plurality of meshes can be established. These meshes, which correspond to different shooting positions and form part of the 3D-model UI, may serve as a basis for screening keyframe images and may also guide the acquisition of keyframe images covering the full range of angles. In some embodiments, in response to determining the keyframe image of the current shooting position, a second tagging process is performed on the corresponding grid of the current shooting position on the three-dimensional model to identify that the keyframe image of the current shooting position has been captured. For example, if the clarity is greater than a threshold, the image is saved as a keyframe image and the color of the respective mesh is changed, indicating that a keyframe image has been acquired at the current angle and that the user may move to the next shooting position to continue acquisition. See the detailed description of fig. 2, step S130, and the detailed descriptions of fig. 3, steps S1310-S1360, under the "Specific implementation examples" section of Du).
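For illustration of the mesh construction Du describes, the following minimal Python sketch divides a hemisphere surface into cells, one cell per shooting position, and applies the quoted formula gl_Position = P x V x M, where M holds a cell's homogeneous vertex coordinates. The azimuth/elevation binning and all names are assumptions for exposition, not Du's actual implementation.

```python
import numpy as np

# Illustrative sketch only (assumed names, not Du's code): divide a
# hemisphere surface into cells, one per shooting position.
def hemisphere_cells(radius, n_azimuth=12, n_elevation=4):
    """Map (azimuth, elevation) cell indices to that cell's four corner
    vertices in homogeneous coordinates."""
    cells = {}
    for i in range(n_azimuth):
        for j in range(n_elevation):
            az0, az1 = 2 * np.pi * i / n_azimuth, 2 * np.pi * (i + 1) / n_azimuth
            el0, el1 = (np.pi / 2) * j / n_elevation, (np.pi / 2) * (j + 1) / n_elevation
            corners = [[radius * np.cos(el) * np.cos(az),
                        radius * np.sin(el),
                        radius * np.cos(el) * np.sin(az),
                        1.0]
                       for az in (az0, az1) for el in (el0, el1)]
            cells[(i, j)] = np.array(corners)
    return cells

def to_clip_space(P, V, verts_M):
    """Du's quoted formula gl_Position = P x V x M, applied to each
    homogeneous vertex (one row of verts_M) of a cell."""
    return (P @ V @ verts_M.T).T
```

In an OpenGL pipeline this projection would run in a vertex shader; the sketch performs it on the CPU purely for clarity.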
Although Du discloses substantial features of the claimed invention, Du lacks the details of displaying the three-dimensional grid to be processed on a display interface of a device to which the acquisition unit belongs, in response to meeting the preset acquisition condition, when the instruction to determine the three-dimensional model corresponding to the target object is received.
Based on the distinguishing technical features described above, the technical problem actually solved by this claim is how to improve the user's experience. With respect to the above distinguishing technical features, Chen et al. (CN 109741404 A) discloses a mobile device-based light-field acquisition method and specifically discloses, at step 103, that the size and center position of the acquisition target are specified. In this step, the user specifies the spatial position and coarse size of the acquisition target object on the identified plane by clicking, dragging, or the like on the touch screen. Preferably, step 103 comprises: specifying an initial acquisition target position by means of a single click on the touch screen; calculating a three-dimensional intersection of the click position on the main plane closest to the camera, based on the screen coordinates of the user's contact, the camera internal and external parameters, and the main plane position and size; setting that closest plane as the current plane; and drawing a uniform triangulated mesh 3D hemisphere on the screen, centered on the 3D intersection point, with a suitable initial radius set according to the camera distance to the 3D intersection point (corresponding to displaying the three-dimensional grid on the display interface of the device to which the capturing device belongs when the instruction to determine the 3D model corresponding to the target object is received). See the detailed description of fig. 1, step S103, paras. 7-10, under the "Specific implementation examples" section of Chen.
Therefore, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Du to include displaying the three-dimensional grid to be processed on a display interface of a device to which the acquisition unit belongs, in response to meeting the preset acquisition condition, when the instruction to determine the three-dimensional model corresponding to the target object is received, in the same conventional manner as taught by Chen, in order to guide the user of the device to efficiently finish the light-field acquisition process and capture the target subject. See para. 1 under the "Contents of the Invention" section of Chen. Further, the role played by the above-described technical features in the Chen reference is the same as the role they play in the present invention, including solving the problem of how to improve the user's experience.
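To make the geometry of Chen's step 103 concrete, the following minimal Python sketch unprojects a screen tap through the camera intrinsics and extrinsics and intersects the resulting ray with the detected plane, yielding the 3D point on which the hemisphere would be centered. The function name and the world-to-camera pose convention are assumptions for exposition, not Chen's actual implementation.

```python
import numpy as np

# Illustrative sketch only (assumed names/conventions, not Chen's code):
# unproject pixel (u, v) through intrinsics K and world-to-camera pose
# (R, t), then intersect the ray with the plane n . x + d = 0.
def tap_to_plane_point(u, v, K, R, t, plane_n, plane_d):
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray through the pixel
    d_world = R.T @ d_cam                             # ray in world frame
    o_world = -R.T @ t                                # camera center in world frame
    denom = plane_n @ d_world
    if abs(denom) < 1e-9:
        return None                                   # ray parallel to the plane
    s = -(plane_n @ o_world + plane_d) / denom
    return o_world + s * d_world if s > 0 else None   # None if behind camera
```

The hemisphere would then be drawn centered on the returned point, with an initial radius proportional to the camera-to-point distance, as Chen describes.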
As per claim 2, Du, as modified by Chen, discloses that the preset acquisition condition comprises at least one of: the target object being present in a field of view of the acquisition unit; an acquisition control being triggered (e.g., Du discloses: in the collecting process, the user can hold a mobile phone or tablet computer with a depth camera and scan a circle around the target object, collecting a certain number of depth images at each shooting position. Thus, the collected depth images can cover each angle of the object, and a sufficient depth image can be selected as the keyframe image within a certain shooting-angle range, so as to reduce data redundancy. Moreover, the keyframe image can be determined synchronously from the depth images obtained at each shooting position, so as to improve the efficiency of keyframe image collection; and by providing visual feedback to the user, the user can instantly know whether a keyframe image has been collected at the corresponding shooting position and angle, so as to ensure that the keyframe images can cover the complete angle range). See paras. 8-10, under the "Specific implementation examples" section of Du.
As per claim 3, Du discloses determining a center point corresponding to the three-dimensional grid to be processed based on the target object; and generating and displaying the three-dimensional grid to be processed enclosing the target object based on the center point (e.g., determining a plane in which a target object lies by performing plane fitting according to point cloud data of the target object; from the location of the target object on the plane, a three-dimensional model is created. For example, from the location of the target object on the planar model, the circle center location is determined; from the circle center position, a three-dimensional hemispherical model is built on the planar model as the three-dimensional model. See paras. 5-6 under the "Contents of the Invention" section of Du). Du lacks the details of, but Chen discloses, generating, in response to meeting a preset acquisition condition, a three-dimensional grid to be processed enclosing the target object, and displaying the three-dimensional grid to be processed on a display interface of a device to which the acquisition unit belongs (e.g., at step 103, the size and center position of the acquisition target are specified. In this step, the user specifies the spatial position and coarse size of the acquisition target object on the identified plane by clicking, dragging, or the like on the touch screen. Preferably, step 103 comprises: specifying an initial acquisition target position by means of a single click on the touch screen; calculating a three-dimensional intersection of the click position on the main plane closest to the camera, based on the screen coordinates of the user's contact, the camera internal and external parameters, and the main plane position and size; setting that closest plane as the current plane; and drawing a uniform triangulated mesh 3D hemisphere on the screen, centered on the 3D intersection point, with a suitable initial radius set according to the camera distance to the 3D intersection point. See the detailed description of fig. 1, step S103, paras. 7-10, under the "Specific implementation examples" section of Chen).
Thus, the combination of Du and Chen obviously encompasses the features of claim 3. The role played by the above-described technical features in the Du and Chen references is the same as the role they play in the present invention, including solving the problem of how to improve the user's experience. Thus, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Du and Chen to yield the features of claim 3, as doing so would provide the benefit of guiding the user of the device to efficiently finish the light-field acquisition process and capture the target subject.
As per claim 4, Chen, as modified by Du, discloses the determining of a center point corresponding to the three-dimensional grid to be processed based on the target object, by determining the center point based on a visual angle of the acquisition unit and a plane in which the target object resides, or by determining the center point of the three-dimensional grid to be processed based on a triggering operation on the display interface (e.g., at step 103, the size and center position of the acquisition target are specified. In this step, the user specifies the spatial position and coarse size of the acquisition target object on the identified plane by clicking, dragging, or the like on the touch screen. Preferably, step 103 comprises specifying an initial acquisition target position by means of a single click on the touch screen, calculating a three-dimensional intersection of the click position on the main plane closest to the camera, based on the screen coordinates of the user's contact, the camera internal and external parameters, and the main plane position and size, and setting that closest plane as the current plane. See the detailed description of fig. 1, step S103, paras. 7-10, under the "Specific implementation examples" section of Chen).
As per claims 5 and 6, Du discloses that the three-dimensional grid to be processed is a hemispherical grid; and that the generating and displaying of the three-dimensional grid to be processed enclosing the target object based on the center point comprise: determining and displaying the three-dimensional grid to be processed based on an adjustment operation for the grid to be used (e.g., determining a plane in which a target object lies by performing plane fitting according to point cloud data of the target object; from the location of the target object on the plane, a three-dimensional model is created. For example, from the location of the target object on the planar model, the circle center location is determined; from the circle center position, a three-dimensional hemispherical model is built on the planar model as the three-dimensional model. After obtaining the plane on which the target object is placed, the three-dimensional hemisphere model can be rendered to the corresponding position on the plane (such as using OpenGL for efficient rendering), so that the three-dimensional hemisphere model encloses the target object within it. See the detailed description of step S120, paras. 3-8, under the "Specific implementation examples" section of Du).
Du lacks the details of, but Chen discloses: determining a radius length of the three-dimensional grid to be processed based on a display size of the target object in an image showing region, to determine and display the three-dimensional grid to be processed based on the radius length and the center point; and drawing a hemispherical grid corresponding to the target object as a grid to be used based on a preset radius length and the center point (e.g., at step 103, the size and center position of the acquisition target are specified. In this step, the user specifies the spatial position and coarse size of the acquisition target object on the identified plane by clicking, dragging, or the like on the touch screen. Preferably, step 103 comprises: specifying an initial acquisition target position by means of a single click on the touch screen; calculating a three-dimensional intersection of the click position on the main plane closest to the camera, based on the screen coordinates of the user's contact, the camera internal and external parameters, and the main plane position and size; setting that closest plane as the current plane; and drawing a uniform triangulated mesh 3D hemisphere on the screen, centered on the 3D intersection, with a suitable initial radius set according to the camera distance to the 3D intersection. A new radius of the three-dimensional mesh sphere is computed based on the change in the screen-coordinate distance between two contacts, and the radius of the sphere drawn on the screen is updated accordingly; by adjusting this radius, the user makes the three-dimensional mesh sphere approximately surround the acquisition target object. At step 104, the acquisition is initiated, the viewpoint coverage and density are automatically calculated during the user's movement of the camera, and images are automatically taken in place. Preferably, step 104 comprises recording an initial acquisition radius and, with the acquisition target object center as the sphere center, constructing an acquisition sphere with the acquisition radius. See the detailed description of fig. 1, steps S103-S104, paras. 7-13, under the "Specific implementation examples" section of Chen).
As such, it is submitted that the combination of Du and Chen obviously encompasses the features as defined in claims 5 and 6. See rationales above with respect to claims 1 and 3 for reasons of obviousness.
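As an aside on the radius-adjustment behavior Chen describes, the following minimal Python sketch scales the hemisphere radius by the change in screen-coordinate distance between two touch contacts (a pinch gesture). The function name and the minimum-radius clamp are assumptions for exposition, not Chen's actual implementation.

```python
import math

# Illustrative sketch only (assumed names, not Chen's code): scale the
# hemisphere radius by the change in distance between two screen contacts.
def updated_radius(radius, old_contacts, new_contacts, min_radius=0.01):
    d_old = math.dist(old_contacts[0], old_contacts[1])
    d_new = math.dist(new_contacts[0], new_contacts[1])
    if d_old < 1e-6:
        return radius                       # degenerate pinch; keep radius
    return max(min_radius, radius * d_new / d_old)
```

For example, updated_radius(0.5, [(100, 100), (200, 100)], [(80, 100), (220, 100)]) spreads the contacts from 100 to 140 pixels apart and so grows the radius from 0.5 to 0.7.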
As per claims 7 and 8, Du discloses determining identification information of each sub-grid (e.g., a keyframe identified and tagged on the grid of the 3D model) in the three-dimensional grid to be processed, and determining a material parameter of each sub-grid; and setting a map for each sub-grid in the three-dimensional grid to be processed (e.g., in some embodiments, in response to determining a keyframe image of a current photographing position, a second tagging process is performed on the corresponding grid of the current photographing position on the three-dimensional model, to identify that the keyframe image of the current photographing position has been captured. For example, if the sharpness is greater than a threshold, the image is saved as a keyframe image and the color of the corresponding grid of that keyframe image is changed, indicating that the keyframe image has been acquired at the current angle and that the user may move to the next shooting position to continue collecting. When keyframe images have been collected at all angles, all grids have changed color, indicating that keyframe images covering the complete angle range have been obtained; scanning can then be finished as needed. See the detailed description of fig. 3, steps S1320-S1360, paras. 1-13, under the "Specific implementation examples" section of Du).
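For illustration of the sharpness-threshold screening and per-cell tagging Du describes, the following minimal Python sketch uses the variance of the Laplacian as a clarity measure and marks a grid cell once a sufficiently sharp frame is saved as its keyframe. The use of OpenCV, the threshold value, and all names are assumptions for exposition, not Du's actual implementation.

```python
import cv2  # OpenCV is an assumption; Du does not name a library

SHARPNESS_THRESHOLD = 100.0  # assumed value for illustration

def sharpness(gray):
    """Variance of the Laplacian, a common clarity/sharpness proxy."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def screen_frame(gray, cell_id, keyframes, cell_status):
    """Save the frame as the cell's keyframe and tag the cell if clear enough."""
    if sharpness(gray) > SHARPNESS_THRESHOLD:
        keyframes[cell_id] = gray
        cell_status[cell_id] = "captured"   # UI would recolor this mesh cell
        return True
    return False
```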
As per claim 12, Du discloses that the target display information is distinct from original display information of the three-dimensional grid to be processed; and that the target display information comprises color information or pattern information (for example, Du discloses that the three-dimensional model rendered in the vicinity of the target object can serve as a UI guide for keyframe image collection; and, in response to an image acquisition apparatus acquiring a multi-frame depth image at a current photographing position, a first marking process is performed on the corresponding mesh of the current photographing position on the three-dimensional model. For example, when the depth camera is aligned at a certain shooting angle, the corresponding mesh may be lit (e.g., changed in color, etc.) to indicate the coverage angle of the currently captured keyframe image. See the detailed description of fig. 3, step S1360, paras. 1-3, under the "Specific implementation examples" section of Du).
As per claim 14, Du discloses adjusting acquisition progress information of the target object on the display interface upon detecting that the display information of the corresponding sub-grid is adjusted as the target display information (e.g., the three-dimensional model rendered in the vicinity of the target object can serve as a UI guide for keyframe image collection; and, in response to an image acquisition apparatus acquiring a multi-frame depth image at a current photographing position, a first marking process is performed on the corresponding mesh of the current photographing position on the three-dimensional model. For example, when the depth camera is aligned at a certain shooting angle, the corresponding mesh may be lit (e.g., changed in color, etc.) to indicate the coverage angle of the currently captured keyframe image. See the detailed description of fig. 3, step S1360, paras. 1-3, under the "Specific implementation examples" section of Du).
As per claim 15, Du discloses determining that acquisition on the target object is completed and three-dimensional data of the target object is obtained upon detecting that the display information of each sub-grid in the three-dimensional grid to be processed is the target display information (e.g., in response to determining the keyframe image of the current shooting position, a second tagging process is performed on the corresponding grid of the current shooting position on the three-dimensional model, to identify that the keyframe image of the current shooting position has been captured. If the sharpness is greater than the threshold, the image is saved as a keyframe image and the color of the corresponding grid of that keyframe image is changed, indicating that the keyframe image has been captured at the current angle and that the user can move to the next capture location to continue capturing. When keyframe images have been acquired at all angles, all meshes have changed color, indicating that keyframe images covering the full range of angles have been acquired, and scanning can be completed as needed. See the detailed description of fig. 3, step S1360, paras. 1-3, under the "Specific implementation examples" section of Du).
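The completion condition Du describes reduces to a simple check over the cell states; a minimal sketch, assuming the cell_status dictionary from the screening sketch above:

```python
# Illustrative sketch only: acquisition is complete once every grid cell of
# the hemisphere has been marked as captured (i.e., every mesh changed color).
def acquisition_complete(cell_status):
    return all(state == "captured" for state in cell_status.values())
```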
As per claim 16, Du discloses constructing and displaying a three-dimensional model corresponding to the target object based on the three-dimensional data, to determine images of the target object at different visual angles based on a triggering operation for the three-dimensional model (e.g., Du discloses, at step 410, that keyframe images of a subject object at each capture location are acquired using the keyframe image acquisition method according to any one of the embodiments described above. At step 420, a three-dimensional reconstruction of the target object is performed according to the keyframe images of the target object at each capture location (e.g., viewing angle). In this way, keyframe images covering the complete angle range are obtained, which improves the quality of the three-dimensional reconstruction. See the detailed description of fig. 4, steps S410-S420, in view of the descriptions of fig. 3, under the "Specific implementation examples" section of Du).
The invention of claim 18 contains features that correspond in scope to the limitations recited in claim 1. As the limitations of claim 1 were found obvious over the combined teachings of Du and Chen, it is readily apparent that the applied prior art performs the underlying elements. As such, the limitations of claim 18 are subject to rejection under the same rationale as claim 1. In addition, Du discloses an electronic device (800, fig. 8), comprising: one or more processors (820); and a memory (810) comprising at least one program (e.g., instructions or application programs) that, when executed by the one or more processors, causes the electronic device to carry out the recited functions. See the detailed descriptions of fig. 8 and also fig. 9 of Du.
The subject matter of independent claim 19 corresponds, in terms of a computer-readable medium, to that of independent method claim 1 and device claim 18, and the rationales raised above to reject claims 1 and 18 also apply to claim 19.
Claim 20 is rejected under the same rationale as claim 19.
Claim 21 is rejected under the same rationale as claim 2.
Allowable Subject Matter
7. Claims 9-11 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the prior art of record fails to teach the data processing method according to claim 1, wherein the adjusting target display information of a corresponding sub-grid in the three-dimensional grid to be processed based on a relative acquisition angle between the acquisition unit and the three-dimensional grid to be processed comprises: adjusting a relative acquisition angle between the acquisition unit and the target object, and determining an acquisition position of the acquisition unit; determining a target sub-grid to be processed based on the acquisition position and display information of each sub-grid in the three-dimensional grid to be processed; and determining a projection center point of the acquisition unit in a plane in which the target sub-grid to be processed resides, and adjusting display information of the target sub-grid to be processed to the target display information upon detecting that the projection center point is within a preset projection threshold range, wherein the preset projection threshold range is determined based on a grid center point of a corresponding sub-grid to be processed (as recited in claim 9); and wherein, after the adjusting target display information of a corresponding sub-grid in the three-dimensional grid to be processed, the method further comprises: correspondingly sending a serial number of the target sub-grid to be processed whose display information is updated as the target display information, and acquired object data, to a target device (as recited in claim 13).
Conclusion
8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Holzer et al. (US 11972556) discloses that a background scenery portion may be identified in each of a plurality of image sets of an object, where each image set includes images captured simultaneously from different cameras. A correspondence between the image sets may be determined, where the correspondence tracks control points associated with the object and present in multiple images. A multi-view interactive digital media representation of the object that is navigable in one or more dimensions and that includes the image sets may be generated and stored.
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESNER SAJOUS whose telephone number is (571) 272-7791. The examiner can normally be reached M-F, 10:00 AM to 7:30 PM (ET).
Examiner interviews are available via telephone and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice or email the Examiner directly at wesner.sajous@uspto.gov.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached on 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESNER SAJOUS/Primary Examiner, Art Unit 2612
WS
01/31/2026