Prosecution Insights
Last updated: April 19, 2026
Application No. 18/684,278

METHODS FOR GENERATING A PARTIAL THREE-DIMENSIONAL REPRESENTATION OF A PERSON

Status: Non-Final OA (§103)
Filed: Feb 16, 2024
Examiner: NGUYEN, PHU K
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Mport Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 86% (above average; 1019 granted / 1184 resolved; +24.1% vs TC avg)
Interview Lift: +7.3% (moderate; resolved cases with interview)
Avg Prosecution: 2y 10m typical; 40 applications currently pending
Career History: 1224 total applications across all art units

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§103: 66.6% (+26.6% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§112: 4.6% (-35.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 1184 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-26 are rejected under 35 U.S.C.
103 as being unpatentable over MEDIONI et al (US 2013/0187919) in view of Evangelidis et al (Joint alignment of multiple point sets with batch and incremental expectation-maximization). As per claim 1, Medioni teaches the claimed “computer-implemented method for generating a partial three-dimensional (3D) representation of a person,” the method comprising: “obtaining depth data of the person captured from a stationary depth camera scanning around the person” (Medioni, [0022] - The sensor 100 can provide both a standard RGB image and a depth image containing the 3D information at 30 frames per second in Video Graphics Array (VGA) format; [0024] - FIG. 1B shows the basic posture 120 of a person for the 3D sensing); “segmenting the depth data into a first segment, wherein the first segment is associated with a first region of the person” (Medioni, [0043] - After obtaining the global registration result, a local registration procedure can be performed for each body part (e.g., a limb or a leg). This can employ a cylindrical representation, as described further below. For example, the body of a subject can be represented as a set of rigid cylinders corresponding to the upper and lower arms, upper and lower legs, torso, neck and head. For increased details, additional cylinders can be added for the hands and feet as shown in FIG. 2A); “mapping the depth data of the first segment to a plurality of point clouds” (Medioni, [0044]-[0045] - The input to the non-rigid registration step can be N point clouds and corresponding transformation matrices computed by the global registration step. For local registration, individual body parts can be identified from the reference depth map that contains the frontal full body of a subject. Either a skeleton fit algorithm or a simple heuristic methods can be used to segment the body parts. 
For instance, a projected histogram of a depth map, with respect to the ground plane, allows one to detect the top of the head, the body center, and the center point between two legs… The reference point cloud D(0) is segmented into k body parts corresponding to the cylindrical representations, such as shown in FIG. 2A. Once the body parts are segmented, each vertex v(j) in the reference data can be classified as one of k body parts. These classified point sets can each be used as a reference body part R={r(1), r(2), . . . , r(k)}. Given a new point cloud Dh(j), Dh(j) is transformed into Dh'(0) using the stored transformation matrix T(j, 0), the nearest point q (in D'(0)) is computed from each vertex r(i) (in R) of the reference data, and the label of q(j) is assigned as the same as the label of r(i). If one has a set of segmented parts, a global registration method is applied on the each segmented part); “performing pairwise registration on the point clouds of the first segment” (Medioni, [0048]-[0054] - 1. Identify body parts from the reference data D(0); 2. Segment k body part regions R={r(1), r(2), . . . , r(k)}// a subset of D; 3. For each point cloud Dh(i) [0051] Transform Dh(j) to Dh’(0) using T(i, 0); Segment k body parts Q={q(1), q(2), . . . q(k)} using R; For each part q(j); Compute local motion between q(j) and r(j): M(j) [0055] Transform q(j) to r(j) using M(j)); “segmenting the depth data into a second segment, wherein the second segment is associated with a second region of the person” (Medioni, [0043] - After obtaining the global registration result, a local registration procedure can be performed for each body part (e.g., a limb or a leg). This can employ a cylindrical representation, as described further below. For example, the body of a subject can be represented as a set of rigid cylinders corresponding to the upper and lower arms, upper and lower legs, torso, neck and head.
For increased details, additional cylinders can be added for the hands and feet as shown in FIG. 2A); “mapping the depth data of the second segment to a plurality of point clouds” (Medioni, [0044]-[0045] - The input to the non-rigid registration step can be N point clouds and corresponding transformation matrices computed by the global registration step. For local registration, individual body parts can be identified from the reference depth map that contains the frontal full body of a subject. Either a skeleton fit algorithm or a simple heuristic methods can be used to segment the body parts. For instance, a projected histogram of a depth map, with respect to the ground plane, allows one to detect the top of the head, the body center, and the center point between two legs… The reference point cloud D(0) is segmented into k body parts corresponding to the cylindrical representations, such as shown in FIG. 2A. Once the body parts are segmented, each vertex v(j) in the reference data can be classified as one of k body parts. These classified point sets can each be used as a reference body part R={r(1), r(2), . . . , r(k)}. Given a new point cloud Dh(j), Dh(j) is transformed into Dh'(0) using the stored transformation matrix T(j, 0), the nearest point q (in D'(0)) is computed from each vertex r(i) (in R) of the reference data, and the label of q(j) is assigned as the same as the label of r(i). If one has a set of segmented parts, a global registration method is applied on the each segmented part); “performing pairwise registration on the point clouds of the second segment” (Medioni, [0048]-[0054] - 1. Identify body parts from the reference data D(0); 2. Segment k body part regions R={r(1), r(2), . . . , r(k)}// a subset of D; 3. For each point cloud Dh(i) [0051] Transform Dh(j) to Dh’(0) using T(i, 0); Segment k body parts Q={q(1), q(2), . . .
q(k)} using R; For each part q(j); Compute local motion between q(j) and r(j): M(j) [0055] Transform q(j) to r(j) using M(j)). It is noted that Medioni’s point cloud and depth data blending (Medioni, Figures 3A-3D, [0061]–[0073] - Depth map transformation and blending can be employed. The cylindrical representation allows a single continuous mesh to be built in a consistent way for different types of junctions. A critical point can be defined in the center of a junction, in where two or three cylindrical systems join, and separating plane(s) can be defined, which separate these cylindrical representations in the 3D space. Then, the overlapping area can be blended using a depth map transformation and simple filtering. This depth blending method can be used for many types of configurations, provided reasonable local cylindrical systems for different configurations) suggests any algorithm to merge the point clouds can be used, such as the claimed “merging the registered point clouds of the first and second segments to generate the partial 3D representation of the person” (see also algorithms for merging point clouds such as pairwise registration, JRMPC algorithm, … Evangelidis, Abstract - This paper addresses the problem of registering multiple point sets. Solutions to this problem are often approximated by repeatedly solving for pairwise registration, which results in an uneven treatment of the sets forming a pair: a model set and a data set. The main drawback of this strategy is that the model set may contain noise and outliers, which negatively affects the estimation of the registration parameters. In contrast, the proposed formulation (We will refer to this algorithm as joint registration of multiple point clouds (JRMPC)) treats all the point sets on an equal footing. Indeed, all the points are drawn from a central Gaussian mixture, hence the registration is cast into a clustering problem. 
We formally derive batch and incremental EM algorithms that robustly estimate both the GMM parameters and the rotations and translations that optimally align the sets. Moreover, the mixture’s means play the role of the registered set of points while the variances provide rich information about the contribution of each component to the alignment). Thus, it would have been obvious, in view of Evangelidis, to configure Medioni’s method as claimed by merging the first and second point clouds to generate the 3D representation of a person. The motivation is to combine different segments in the form of point clouds to generate a 3D model of merged segments. Claim 2 adds into claim 1 “wherein segmenting the depth data into the first segment comprises: identifying the depth data associated with the torso and head region of the person by box-bounding” (Medioni, [0043] - After obtaining the global registration result, a local registration procedure can be performed for each body part (e.g., a limb or a leg). This can employ a cylindrical representation, as described further below. For example, the body of a subject can be represented as a set of rigid cylinders corresponding to the upper and lower arms, upper and lower legs, torso, neck and head. For increased details, additional cylinders can be added for the hands and feet as shown in FIG. 2A). It is noted that Medioni’s cylindrical representation implies a type of “box bounding,” with the “box” represented by a cylinder. Claim 3 adds into claim 2 “wherein mapping the depth data of the first segment to a plurality of point clouds comprises: filtering the depth data of the first segment; and generating point clouds based on the filtered depth data of the first segment” (Medioni, [0059] - Spatial smoothing can be performed to remove the noise inherent in the data capture stage using such low cost 3D cameras. For spatial filtering, a bilateral filter can be used, which can remove the noise while keeping the edges.
This filtering process is fast thanks to the cylindrical representation of the model. If multiple temporal instances of a view are acquired, temporal smoothing can be performed, which can further reduce noise. For multiple observations, a running mean can be applied on the value of each pixel of the unwrapped cylindrical map 230. This temporal integration enables reduction of the intrinsic noise while aggregating the data). Claim 4 adds into claim 3 “wherein pairwise registration on the point clouds of the first segment is performed using the Interior Closest Point algorithm” (Medioni, [0028] - Iterative Closest Point (ICP) processes can be used on the transformed data D'(0) and D(0) to minimize the accumulated error caused by the relative motion computation steps. This procedure is summarized in Algorithm 1). Claim 5 adds into claim 1 “wherein pairwise registration on the point clouds of the first segment comprises: performing joint registration on the point clouds of the first segment” (Medioni, Figures 3A-3D, [0061]–[0073] - Depth map transformation and blending can be employed. The cylindrical representation allows a single continuous mesh to be built in a consistent way for different types of junctions. A critical point can be defined in the center of a junction, in where two or three cylindrical systems join, and separating plane(s) can be defined, which separate these cylindrical representations in the 3D space. Then, the overlapping area can be blended using a depth map transformation and simple filtering. This depth blending method can be used for many types of configurations, provided reasonable local cylindrical systems for different configurations). 
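(Editorial note: for readers unfamiliar with the Iterative Closest Point procedure cited against claim 4 (Medioni, [0028]), here is a minimal NumPy sketch of pairwise registration: brute-force nearest-neighbour matching alternated with a closed-form Kabsch/SVD rigid update. Function names and parameters are illustrative assumptions, not taken from the application or the references.)

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t.

    src, dst: (N, 3) arrays of corresponding points. This SVD step is
    the core update inside each ICP iteration.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=20):
    """Naive pairwise ICP: nearest-neighbour matching + Kabsch update."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for a small sketch)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = kabsch(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Note this sketches textbook ICP; whether "Interior Closest Point" in the claim is a variant or a typographical slip for "Iterative Closest Point" is for the applicant to clarify.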
Claim 6 adds into claim 5 “wherein performing joint registration on the point clouds of the first segment comprises: initialising and selecting centroids of the depth data of the first segment; applying the Joint Registration of Multiple Point Clouds (JRMPC) algorithm based on the depth data associated with the selected centroids” which Medioni’s aligning point clouds (e.g., Medioni, Figures 3A-3D, [0061]–[0073] - Depth map transformation and blending can be employed. The cylindrical representation allows a single continuous mesh to be built in a consistent way for different types of junctions. A critical point can be defined in the center of a junction, in where two or three cylindrical systems join, and separating plane(s) can be defined, which separate these cylindrical representations in the 3D space. Then, the overlapping area can be blended using a depth map transformation and simple filtering. This depth blending method can be used for many types of configurations, provided reasonable local cylindrical systems for different configurations) suggests any well-known algorithm to aligning point clouds can be used such as a Joint Registration of Multiple Point Clouds (JRMPC) algorithm as claimed (it is a common knowledge that the JRMPC algorithm iteratively estimates both the transformation parameters (rotations and translations) for each individual point cloud and the Gaussian Mixture Model parameters (including the centroids) that best fit the transformed data) (Evangelidis, Abstract - This paper addresses the problem of registering multiple point sets. Solutions to this problem are often approximated by repeatedly solving for pairwise registration, which results in an uneven treatment of the sets forming a pair: a model set and a data set. The main drawback of this strategy is that the model set may contain noise and outliers, which negatively affects the estimation of the registration parameters. 
In contrast, the proposed formulation (We will refer to this algorithm as joint registration of multiple point clouds (JRMPC)) treats all the point sets on an equal footing. Indeed, all the points are drawn from a central Gaussian mixture, hence the registration is cast into a clustering problem. We formally derive batch and incremental EM algorithms that robustly estimate both the GMM parameters and the rotations and translations that optimally align the sets. Moreover, the mixture’s means play the role of the registered set of points while the variances provide rich information about the contribution of each component to the alignment). Claim 7 adds into claim 1 “wherein segmenting the depth data into the second segment comprises: identifying the depth data associated with left and right arms regions of the person by box-bounding” (Medioni, [0043] - After obtaining the global registration result, a local registration procedure can be performed for each body part (e.g., a limb or a leg). This can employ a cylindrical representation, as described further below. For example, the body of a subject can be represented as a set of rigid cylinders corresponding to the upper and lower arms, upper and lower legs, torso, neck and head. For increased details, additional cylinders can be added for the hands and feet as shown in FIG. 2A). It is noted that Medioni’s cylindrical representation implies a type of “box bounding,” with the “box” represented by a cylinder. Claim 8 adds into claim 7 “wherein segmenting the depth data into the second segment further comprises: spatio-temporally segmenting the identified depth data with the left and right arm regions” (Medioni, [0043] - After obtaining the global registration result, a local registration procedure can be performed for each body part (e.g., a limb or a leg). This can employ a cylindrical representation, as described further below.
For example, the body of a subject can be represented as a set of rigid cylinders corresponding to the upper and lower arms, upper and lower legs, torso, neck and head. For increased details, additional cylinders can be added for the hands and feet as shown in FIG. 2A; [0069] - The critical point is in the intersection of two planes. For depth blending, an overlapping region can be defined around the separating plane ø1. When the arm area is segmented from the torso…). Claim 9 adds into claim 1 “wherein segmentation of the depth data into the second segment is based on the registered point clouds of the first segment” (Medioni, [0044]-[0045] - The input to the non-rigid registration step can be N point clouds and corresponding transformation matrices computed by the global registration step. For local registration, individual body parts can be identified from the reference depth map that contains the frontal full body of a subject. Either a skeleton fit algorithm or a simple heuristic methods can be used to segment the body parts. For instance, a projected histogram of a depth map, with respect to the ground plane, allows one to detect the top of the head, the body center, and the center point between two legs… The reference point cloud D(0) is segmented into k body parts corresponding to the cylindrical representations, such as shown in FIG. 2A. Once the body parts are segmented, each vertex v(j) in the reference data can be classified as one of k body parts. These classified point sets can each be used as a reference body part R={r(1), r(2), . . . , r(k)}. Given a new point cloud Dh(j), Dh(j) is transformed into Dh'(0) using the stored transformation matrix T(j, 0), the nearest point q (in D'(0)) is computed from each vertex r(i) (in R) of the reference data, and the label of q(j) is assigned as the same as the label of r(i). If one has a set of segmented parts, a global registration method is applied on the each segmented part). 
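(Editorial note: the "projected histogram" heuristic the rejection quotes from Medioni [0044] (detecting the top of the head, the body center, and the point between the legs) can be illustrated with a short sketch. The axis convention, bin count, and quantile cutoff below are assumptions chosen for illustration, not details from either document.)

```python
import numpy as np

def body_landmarks(points, bins=20, lower_q=0.4):
    """Illustrative landmark detection from a frontal body point cloud.

    points: (N, 3) array; axis 1 is taken as height above the ground
    plane (an assumed convention). Projects onto the height axis for
    the head top, then onto x within the lower portion of the body to
    locate the gap between the legs.
    """
    heights = points[:, 1]
    head_top = heights.max()                      # top of the head
    body_center = points.mean(axis=0)             # crude body centre
    # keep only the lowest points (roughly the legs) before histogramming x
    lower = points[heights < np.quantile(heights, lower_q)]
    hist, edges = np.histogram(lower[:, 0], bins=bins)
    gap = hist.argmin()                           # emptiest x-bin = between legs
    between_legs_x = 0.5 * (edges[gap] + edges[gap + 1])
    return head_top, body_center, between_legs_x
```

A production segmenter would, as Medioni notes, more likely use a skeleton fit; the histogram version is the "simple heuristic" alternative.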
Claim 10 adds into claim 7 “wherein mapping the depth data of the second segment to a plurality of point clouds comprises: filtering the depth data of the second segment; and generating point clouds based on the filtered depth data of the second segment” (Medioni, [0059] - Spatial smoothing can be performed to remove the noise inherent in the data capture stage using such low cost 3D cameras. For spatial filtering, a bilateral filter can be used, which can remove the noise while keeping the edges. This filtering process is fast thanks to the cylindrical representation of the model. If multiple temporal instances of a view are acquired, temporal smoothing can be performed, which can further reduce noise. For multiple observations, a running mean can be applied on the value of each pixel of the unwrapped cylindrical map 230. This temporal integration enables reduction of the intrinsic noise while aggregating the data). Claim 11 adds into claim 7 “wherein pairwise registration on the point clouds of the second segment is performed using the Interior Closest Point algorithm” (Medioni, [0028] - Iterative Closest Point (ICP) processes can be used on the transformed data D'(0) and D(0) to minimize the accumulated error caused by the relative motion computation steps. This procedure is summarized in Algorithm 1). Claim 12 adds into claim 1 “wherein the depth data comprises a plurality of sequential depth frames, each depth frame comprising a plurality of depth pixels” (Medioni, [0022] - The sensor 100 can provide both a standard RGB image and a depth image containing the 3D information at 30 frames per second in Video Graphics Array (VGA) format; [0024] - FIG. 1B shows the basic posture 120 of a person for the 3D sensing). Claims 13-24, 25, and 26 claim a server and a method based on the method of claims 1-12; therefore, they are rejected under a similar rationale. 
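(Editorial note: the merging step of claim 1, "merging the registered point clouds of the first and second segments," reduces, once the segments share a coordinate frame, to concatenation plus de-duplication of the overlap. A minimal sketch, with an assumed voxel size rather than any value from the application:)

```python
import numpy as np

def merge_point_clouds(clouds, voxel=0.01):
    """Merge already-registered point clouds into one representation.

    Concatenates the segment clouds, then collapses all points falling
    in the same voxel to their centroid, so regions where segments
    overlap are not double-counted.
    """
    pts = np.vstack(clouds)                      # all points, shared frame
    keys = np.floor(pts / voxel).astype(np.int64)
    # group points by voxel key and average each group
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((uniq.shape[0], 3))
    counts = np.zeros(uniq.shape[0])
    np.add.at(sums, inv.ravel(), pts)
    np.add.at(counts, inv.ravel(), 1.0)
    return sums / counts[:, None]
```

Evangelidis's JRMPC reaches a similar end state differently: the fitted mixture means themselves serve as the merged, registered point set.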
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHU K NGUYEN whose telephone number is (571) 272-7645. The examiner can normally be reached M-F 8-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F. Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHU K NGUYEN/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Feb 16, 2024: Application Filed
Oct 04, 2025: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602147: ZOOM ACTION BASED IMAGE PRESENTATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602874: FRAGMENTATION MODEL GENERATION METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602836: METHOD TO GENERATE DISPLACEMENT FOR SYMMETRY MESH (granted Apr 14, 2026; 2y 5m to grant)
Patent 12599485: SYSTEMS AND METHODS FOR ORTHOPEDIC IMPLANTS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597206: MECHANICAL WEIGHT INDEX MAPS FOR MESH RIGGING (granted Apr 07, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 93% (+7.3%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 1184 resolved cases by this examiner. Grant probability derived from career allow rate.
