Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
DETAILED ACTION
Priority
No foreign or domestic priority is claimed. The effective filing date of U.S. Application No. 18/737,565 is 06/07/2024.
Status of Claims
Claims 1–20 are pending in the application. Claims 1–20 are rejected.
Overview of Grounds of Rejection
Ground of Rejection 1: Claims 1–20 are rejected under 35 U.S.C. § 103 over Seidel et al. (US20130330060A1) in view of Bogo et al. (NPL), Nelson et al. (US20110292051A1), and Wilf (US20150154453A1).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
(Please see the cited paragraphs, sections, pages, or surrounding text in the references for the paraphrased content.)
Ground of Rejection 1
Claims 1–20 are rejected under 35 U.S.C. § 103 as being unpatentable over Seidel et al. (US20130330060A1) in view of Bogo et al. (NPL), and further in view of Nelson et al. (US20110292051A1) and Wilf (US20150154453A1).
As per Claim 1, Seidel et al. teach the following portion of Claim 1, which recites:
“A computer implemented method for generating a personalized three-dimensional avatar model, comprising: receiving at least one photographic image of a persona;” Seidel et al. (US20130330060A1) teaches receiving photographic image content of a human subject as input:
“The inventive system takes as input a single-view or multi-view video sequence with footage of a human actor to be spatiotemporally reshaped (FIG. 2). There is no specific requirement on the type of scene, type of camera, or appearance of the background.” – Seidel et al., ¶ [0022]
A video sequence is composed of photographic frames. Thus, Seidel directly teaches receiving at least one photographic image (frame) of a persona.
Seidel et al. teach the following portion of Claim 1, which recites:
“preprocessing the received image to extract a body figure of a persona for further processing;”
Seidel et al. (US20130330060A1) teaches preprocessing that extracts a body figure (silhouette):
“In a first step, the silhouette of the actor in the video footage is segmented using off-the-shelf video processing tools.” – Seidel et al., ¶ [0022]
Segmenting the actor’s silhouette is preprocessing that extracts the persona’s “body figure” for downstream fitting/analysis.
Seidel et al. teach the following portion of Claim 1, which recites:
“generating pose data from the preprocessed image by identifying key points of the body figure mapped to a coordinate space;”
Seidel et al. (US20130330060A1) teaches key points in a coordinate space (“image plane”):
“The second component Ef measures the sum of distances in the image plane between feature points of the person tracked over time, and the re-projected 3D vertex locations of the model …” – Seidel et al., ¶ [0030]
Seidel uses feature points (key points) of the person and measures them in the image plane, which is a coordinate space. Those tracked key points provide pose-driving information derived from the preprocessed person region (the segmented silhouette/person).
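For illustration only, the cited Ef-style comparison (sum of image-plane distances between tracked feature points and re-projected 3D vertex locations) can be sketched as follows. This is a minimal sketch: the pinhole camera model, the focal length, and all names are illustrative assumptions and are not taken from Seidel's disclosure.

```python
import math

def project(vertex, focal=1.0):
    """Project a 3D vertex (x, y, z) to the image plane with an assumed
    pinhole camera model (an illustrative simplification)."""
    x, y, z = vertex
    return (focal * x / z, focal * y / z)

def feature_distance_term(feature_points_2d, model_vertices_3d):
    """E_f-style term: sum of image-plane distances between tracked 2D
    feature points and the re-projected 3D vertex locations."""
    total = 0.0
    for (u, v), vert in zip(feature_points_2d, model_vertices_3d):
        pu, pv = project(vert)
        total += math.hypot(u - pu, v - pv)
    return total
```

A perfectly fitted model yields a zero term; any misalignment between a tracked point and its re-projected vertex increases the value.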
Seidel alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Bogo et al. (NPL), they collectively teach some of the limitation(s).
Bogo teaches the following portion of Claim 1, which recites:
“loading a three-dimensional avatar model template, wherein the three-dimensional avatar model template is selected based on persona characteristics;”
Bogo et al. (NPL) teaches selecting among alternative 3D body model templates based on a persona characteristic (gender):
“Here we use one of three shape models: male, female, and gender-neutral. SMPL defines only male and female models. For a fully automatic method, we trained a new gender-neutral model … If the gender is known, we use the appropriate model.” – Bogo et al. (NPL), p. 6
Bogo loads a 3D body model and selects the template/model variant based on a persona characteristic (whether gender is known and which gender applies). This is a direct example of a 3D avatar-model “template” being selected based on persona characteristics.
Seidel teaches the following portion of Claim 1, which recites:
“aligning the three-dimensional avatar model template with the generated pose data to position the three-dimensional avatar model in a posture corresponding to the pose data;”
Seidel et al. (US20130330060A1) teaches aligning posture by fitting pose per frame:
“A marker-less motion capture approach may be used to fit the pose and shape of the body model to a human actor in each frame of a single view or multi-view video sequence …” – Seidel et al., ¶ [0027]
Fitting the model’s pose to match the actor per frame positions the 3D model in a posture corresponding to the observed pose evidence (including the tracked feature points and silhouette constraints).
Bogo teaches the following portion of Claim 1, which recites:
“performing gradient descent optimization, comprising:”
Bogo et al. (NPL) teaches an iterative, derivative-informed optimizer used to minimize the fitting objective:
“We minimize Eq. (1) using Powell’s dogleg method [31], using OpenDR and Chumpy [2,28]. Optimization for a single image takes less than 1 minute …” – Bogo et al. (NPL), p. 9
Bogo further states: “Although [term shown in an equation image not reproduced here] is not differentiable … we approximate its Jacobian by the Jacobian of the mode with minimum energy in the current optimization step.” – Bogo et al. (NPL), p. 8
The claim calls for “gradient descent optimization.” Bogo describes minimizing the objective using Powell’s dogleg method and discusses using a Jacobian during optimization steps. A POSITA would recognize dogleg as a standard gradient-informed/trust-region approach (using derivative information such as Jacobians/gradients) to iteratively reduce the loss, making it an obvious implementation choice for gradient-descent-type optimization in this fitting context.
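For illustration, a derivative-informed iterative minimization of a fitting objective can be sketched as plain gradient descent with finite-difference gradients. This is a simplified stand-in for Powell’s dogleg method, not Bogo’s implementation; the learning rate, step count, and toy objective are illustrative assumptions.

```python
def gradient_descent(loss, params, lr=0.1, steps=200, eps=1e-6):
    """Minimize `loss` by plain gradient descent using forward-difference
    gradients; a simplified stand-in for the derivative-informed
    dogleg/trust-region fitting described in the cited references."""
    params = list(params)
    for _ in range(steps):
        base = loss(params)
        grad = []
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += eps  # forward-difference probe of parameter i
            grad.append((loss(bumped) - base) / eps)
        params = [p - lr * g for p, g in zip(params, grad)]
    return params

# Toy fitting objective: pull a single shape parameter toward a target.
fitted = gradient_descent(lambda p: (p[0] - 4.0) ** 2, [0.0])
```

Dogleg methods replace the fixed-step update with a trust-region step built from the same derivative information, which is why a POSITA would view the two as interchangeable implementation choices for this fitting context.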
Seidel teaches the following portion of Claim 1, which recites:
“calculating a loss function applied to projection of the aligned three-dimensional avatar model and the body figure of the preprocessed image,”
Seidel et al. (US20130330060A1) teaches an image-based error (loss) applied to the model’s projection, including silhouette alignment to the segmented person (body figure):
“The inventive motion capture scheme infers pose and shape parameters by minimizing an image-based error function … that … penalizes misalignment between the 3D body model and its projection into each frame …” – Seidel et al., ¶ [0029] and Seidel further explains the silhouette component: “The first component Es measures the misalignment of the silhouette boundary of the re-projected model with the silhouette boundary of the segmented person …” – Seidel et al., ¶ [0030]
Seidel’s image-based error function is a loss applied to the model’s projection into each frame, and it directly compares the re-projected model silhouette boundary to the segmented person silhouette boundary (the extracted “body figure”).
Seidel teaches the following portion of Claim 1, which recites:
“and adjusting at least one parameter of the three-dimensional avatar model parameter if the loss function value exceeds an accuracy threshold;”
Seidel et al. (US20130330060A1) teaches a threshold-based trigger tied to fitting error:
“Errors in the local optimization result manifest through a limb-specific fitting error E(Φt, Λt) that lies above a threshold. For global optimization, one may utilize a particle filter.” – Seidel et al., ¶ [0034]
Seidel indicates that when the fitting error is above a threshold, the system proceeds to additional optimization (global pose inference), which corresponds to adjusting model parameters when the loss/error exceeds a threshold.
Seidel and Bogo alone do not explicitly teach all the limitation(s) of the claim. However, when combined with Nelson et al., they collectively teach some of the limitation(s).
Nelson teaches the following portion of Claim 1, which recites:
“customizing the adjusted three-dimensional avatar model at an avatar customization unit to include user-specific features;”
Nelson et al. (US20110292051A1) teaches customization via an editing environment:
“The avatar can be further customized by the individual in an editing environment and used in various applications …” – Nelson et al., ¶ [0004]
Nelson’s editing environment is an avatar customization unit that supports adding or modifying user-specific features after the avatar is created.
Seidel, Bogo, and Nelson alone do not explicitly teach all the limitation(s) of the claim. However, when combined with Wilf, they collectively teach all of the limitation(s).
Wilf teaches the following portion of Claim 1, which recites:
“storing the personalized three-dimensional avatar model in a storage unit for subsequent retrieval and use.”
Wilf (US20150154453A1) teaches storing user-specific 3D and appearance data in a database, and also teaches that such 3D shape information is used to build an avatar:
“the system further comprises a user shape and appearance database for storing 3D size measurements of the user body together with the user appearance data.” – Wilf, ¶ [0026] and Wilf further states: “the 3D shape information is further used to build an avatar that best describes the user's body shape … [and] user appearance information … is embedded into the avatar …” – Wilf, ¶ [0070]
Wilf stores the user-specific 3D and appearance parameters in a database and teaches using that 3D shape information to build an avatar with embedded appearance features. Storing these avatar-defining parameters in the database effectively stores the personalized 3D avatar model in a form suitable for later retrieval and use (for example, reconstruction/rendering of the avatar).
Before the effective filing date of the claimed invention, a person of ordinary skill in the art (POSITA) would have been motivated to combine Seidel et al. (US20130330060A1) with Bogo et al. (NPL) because Seidel clearly defines a projection-based image-based error function with a silhouette-driven term comparing the re-projected model to the segmented person and describes threshold-triggered escalation when fitting error lies above a threshold, while Bogo teaches a well-known derivative-informed optimization implementation for minimizing such objectives, including use of Powell’s dogleg method and Jacobian-based steps during iterative fitting. A POSITA would further incorporate Nelson et al. (US20110292051A1) to provide a user-facing editing environment where the generated avatar can be further customized, improving system usability with predictable results. A POSITA would also incorporate Wilf (US20150154453A1) to persist the personalized output by storing 3D size measurements together with user appearance data in a database, where Wilf also teaches using the stored 3D shape information to build an avatar with embedded appearance features, thereby enabling later retrieval and reuse. The combined system yields predictable improvements: template selection based on persona characteristics, posture alignment through model fitting to image evidence, iterative optimization to reduce projection loss (with threshold-based refinement), customization, and storage for subsequent retrieval and use.
As per Claim 2, Seidel alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Wilf, they collectively teach all of the limitation(s).
Wilf teaches the limitation(s) of Claim 2, which recites:
“The method of claim 1, wherein preprocessing the received image comprises a background removal operation to isolate the body figure of the persona.”
Wilf (US20150154453A1) teaches preprocessing that performs background removal (background subtraction) to isolate the foreground body figure:
“...the user may be asked to exit the scene, to facilitate background learning... Once the background model is stable... for each newly captured video frame, a background subtraction module ... computes the pixel-by-pixel absolute difference between the video frame ... and the background model image... [to] obtain a binary image... [and] ... eliminate small noise regions and small holes inside the object.” — Wilf, ¶ [0088]
Wilf’s background subtraction module removes the background (via difference from a learned background model) and produces a binary image representing the foreground “object,” which directly corresponds to isolating the body figure of the persona during preprocessing.
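The cited pixel-by-pixel background subtraction can be sketched as follows. This is a minimal sketch: the greyscale row-of-pixels representation and the difference threshold value are illustrative assumptions, not Wilf's disclosed parameters.

```python
def background_subtract(frame, background, diff_threshold=25):
    """Compute a binary foreground mask as the pixel-by-pixel absolute
    difference between a video frame and a learned background model,
    thresholded into 1 (foreground) / 0 (background)."""
    mask = []
    for row_f, row_b in zip(frame, background):
        mask.append([1 if abs(f - b) > diff_threshold else 0
                     for f, b in zip(row_f, row_b)])
    return mask
```

The resulting binary image corresponds to Wilf's foreground "object"; in practice morphological cleanup would follow to remove small noise regions and holes, as the reference describes.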
Before the effective filing date of the claimed invention, a POSITA would have combined Seidel et al. (US20130330060A1) with Wilf (US20150154453A1) because Seidel relies on a segmented person silhouette for fitting (Seidel et al., ¶ [0022]), and Wilf teaches a concrete background subtraction module that removes background and outputs a binary image of the foreground object (Wilf, ¶ [0088]), yielding predictable improvements in silhouette isolation and downstream fitting robustness.
As per Claim 3, Seidel teaches the limitation(s) of Claim 3 which recites:
“The method of claim 1, wherein preprocessing the received image comprises defining the body figure contour of the persona.”
Seidel et al. (US20130330060A1) teaches defining the contour (boundary) of the body figure as part of the fitting pipeline:
“The first component Es measures the misalignment of the silhouette boundary of the re-projected model with the silhouette boundary of the segmented person.” — Seidel et al., ¶ [0030]
The silhouette boundary of the segmented person is the body-figure contour, so Seidel directly supports preprocessing that defines the body figure contour.
As per Claim 4, Seidel alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Nelson, they collectively teach all of the limitation(s).
Nelson teaches the limitation(s) of Claim 4, which recites:
“The method of claim 1, wherein preprocessing the received image includes performing at least one image modification of resizing the image, cropping the image, and applying color filtering to the image.”
Nelson et al. (US20110292051A1) teaches preprocessing that includes resizing and color filtering / color processing:
“In some implementations, first stage 200 can include the following image processing modules: resizing 202, color space conversion 204, … All or some of the processing modules in first stage 200 can be applied to input image 104 to prepare input image 104 for further processing …” — Nelson et al., ¶ [0021]
“Input image 104 can be processed by resizing module 202, which can downsample input image 104 to a lower resolution …” — Nelson et al., ¶ [0022]
“Input image 104 can be processed by color space conversion module 204, which can convert input image 104 from a first color space to a second color space … for example … RGB … to … HSV …” — Nelson et al., ¶ [0023]
Nelson’s preprocessing performs at least one of the claimed image modifications, namely resizing the image and color processing (color space conversion), which satisfies Claim 4’s “at least one” requirement.
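For illustration, downsampling to a lower resolution can be sketched as naive 2x2 block averaging. The averaging scheme is an illustrative assumption; Nelson does not disclose a specific resampling kernel.

```python
def downsample_2x(image):
    """Downsample a greyscale image (list of pixel rows) by a factor of
    two in each dimension, averaging each 2x2 block; an illustrative
    stand-in for a resizing module that lowers resolution."""
    out = []
    for r in range(0, len(image) - 1, 2):
        row = []
        for c in range(0, len(image[0]) - 1, 2):
            block_sum = (image[r][c] + image[r][c + 1]
                         + image[r + 1][c] + image[r + 1][c + 1])
            row.append(block_sum / 4.0)
        out.append(row)
    return out
```

Each application quarters the pixel count, which is the source of the computational savings discussed in the motivation to combine.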
Before the effective filing date of the claimed invention, a POSITA would have been motivated to incorporate Nelson’s resizing/downsampling preprocessing step into Seidel’s video processing pipeline. Because Seidel processes single-view or multi-view video sequences to extract a silhouette (which can be computationally expensive for high-resolution frames), a POSITA would logically look to standard image preparation techniques, such as Nelson's resizing module, to downsample the images to a lower resolution prior to segmentation. This combination provides the predictable result of reducing computational load and increasing processing speed without fundamentally altering Seidel’s silhouette extraction logic.
As per Claim 5, Seidel alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Bogo, they collectively teach all of the limitation(s).
Bogo teaches the limitation(s) of Claim 5, which recites:
“The method of claim 1, wherein loading the three-dimensional avatar model template is based on the pose data.”
Bogo et al. (NPL) teaches loading/instantiating the 3D body model with an initial state based on pose data (predicted 2D joints), including a conditional orientation choice driven by joint geometry:
“We assume that camera translation and body orientation are unknown… We initialize the camera translation… via… the torso length… and the predicted 2D joints… To address this, we try two initializations when the 2D distance between the CNN-estimated 2D shoulder joints is below a threshold: first with body orientation estimated as above and then with that orientation rotated by 180 degrees.” — Bogo et al. (NPL), Sec. 3.3, p. 9
In a 3D fitting pipeline, “loading” a model template into the working coordinate space necessarily includes instantiating it with an initial transform/state (for example, translation and orientation). Bogo conditions that initial instantiation on pose data (the predicted 2D joints and the shoulder-joint distance threshold), so the model is loaded/instantiated based on the pose data.
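The cited conditional two-initialization strategy can be sketched as follows. This is a minimal sketch: the base orientation, the coordinate units, and the threshold value are illustrative assumptions, not Bogo's disclosed values.

```python
import math

def initial_orientations(shoulder_left, shoulder_right, flip_threshold=20.0):
    """Return candidate initial body orientations (radians) driven by
    pose data: when the 2D distance between estimated shoulder joints
    falls below a threshold (an ambiguous, near-profile pose), try both
    the estimated orientation and its 180-degree flip."""
    dist = math.hypot(shoulder_left[0] - shoulder_right[0],
                      shoulder_left[1] - shoulder_right[1])
    base = 0.0  # estimated body orientation; assumed for illustration
    if dist < flip_threshold:
        return [base, base + math.pi]
    return [base]
```

Fitting would then be run from each candidate initialization and the lower-loss result kept, reducing front/back ambiguity as described.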
Before the effective filing date of the claimed invention, a POSITA combining Seidel’s per-frame 3D body fitting (Seidel et al., ¶ [0027]) with Bogo’s pose/shape fitting would have found it obvious to load/instantiate the 3D model with an initial orientation based on pose data (Bogo et al. (NPL), Sec. 3.3, p. 9) to reduce ambiguity (for example, front/back flips) and achieve predictable improvements in initialization and convergence.
As per Claim 6, Seidel alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Bogo, they collectively teach all of the limitation(s).
Bogo teaches the limitation(s) of Claim 6, which recites:
“The method of claim 1, wherein loading the three-dimensional avatar model template is based on at least one persona characteristic including gender, weight, height, or nationality.”
Bogo et al. (NPL) teaches loading/selecting the 3D model template based on gender:
“SMPL is gender-specific; i.e. it distinguishes the shape space of females and males. To make our method fully automatic, we introduce a gender-neutral model. If we do not know the gender, we fit this model to images. If we know the gender, then we use a gender-specific model for better results.” — Bogo et al. (NPL), p. 2
Bogo selects/uses a gender-specific (or gender-neutral) 3D body model depending on whether gender is known, which satisfies “loading the three-dimensional avatar model template is based on at least one persona characteristic,” namely gender.
Before the effective filing date of the claimed invention, a POSITA would have combined Seidel et al. (US20130330060A1) with Bogo et al. (NPL) because Seidel fits a 3D body model to image evidence (Seidel et al., ¶ [0027]) and Bogo teaches choosing a gender-specific (or gender-neutral) body model if the gender is known (Bogo et al. (NPL), p. 2), yielding predictable improvements in fit accuracy by using a template matched to a persona characteristic.
As per Claim 7, Seidel teaches the limitation(s) of Claim 7, which recites:
“The method of claim 1, wherein the loss function evaluates the disparity in areas covered by the body figure in the preprocessed image and areas covered by the projection of the three-dimensional avatar model.”
Seidel et al. (US20130330060A1) teaches a silhouette-based loss that measures mismatched covered areas (pixels) between the image body figure and the model’s projected silhouette:
“FIGS. 4(a)-4(d) show components of the pose error function… 4(c) silhouette error term used during global optimization; a sum of image silhouette pixels not covered by the model, and vice versa …” — Seidel et al., ¶ [0015]
The “sum of image silhouette pixels not covered by the model, and vice versa” is a direct area-disparity measure between (i) the body figure area in the preprocessed image (the image silhouette) and (ii) the area covered by the model’s projected silhouette.
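The cited area-disparity measure (image silhouette pixels not covered by the model, and vice versa) can be sketched over binary masks as follows. The binary-mask representation is an illustrative assumption.

```python
def silhouette_area_loss(image_mask, model_mask):
    """Count silhouette pixels present in exactly one of the two binary
    masks: image silhouette pixels not covered by the projected model,
    plus model pixels not covered by the image silhouette."""
    return sum(
        1
        for row_i, row_m in zip(image_mask, model_mask)
        for a, b in zip(row_i, row_m)
        if a != b
    )
```

A perfect overlap yields zero; the value grows with the symmetric difference of the two covered areas, matching the quoted error-term description.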
As per Claim 8, Seidel teaches the limitation(s) of Claim 8, which recites:
“The method of claim 3, wherein the loss function compares the contours of the body figure in the preprocessed images with contours of the projection of the three-dimensional avatar model.”
Seidel et al. (US20130330060A1) teaches a loss term that compares contours (boundaries) of the segmented person (body figure) with the contours of the projected model:
“The first component Es measures the misalignment of the silhouette boundary of the re-projected model with the silhouette boundary of the segmented person …” — Seidel et al., ¶ [0030]
The silhouette boundary is the body-figure contour, and the re-projected model provides the model’s projected contour, so Seidel’s Es directly compares the two contours via a loss term.
As per Claim 9, Seidel alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Bogo, they collectively teach all of the limitation(s).
Bogo teaches the limitation(s) of Claim 9, which recites:
“The method of claim 1, wherein the gradient descent optimization is performed in a cycle until the loss function value falls below the accuracy threshold.”
Bogo teaches iterative, repeated optimization steps (“cycle”) while minimizing the objective:
“After estimating camera translation, we fit our model by minimizing Eq. (1) in a staged approach … and gradually decreasing them in the subsequent optimization stages …” — Bogo et al. (NPL), Sec. 3.3, p. 9
“We minimize Eq. (1) using Powell’s dogleg method …” — Bogo et al. (NPL), Sec. 3.3, p. 9
Powell’s dogleg is a standard iterative, gradient-informed trust-region optimizer. A POSITA would understand that such minimization proceeds in repeated steps (cycles) and terminates when a convergence criterion is met, such as the objective (loss) becoming sufficiently small or its improvement falling below a tolerance, which corresponds to the claimed “loss function value falls below the accuracy threshold.”
Seidel confirms that fitting error is evaluated against a threshold in this context:
“Errors … manifest through a limb-specific fitting error … that lies above a threshold.” — Seidel et al., ¶ [0034]
Seidel shows threshold-based acceptability checks are used in this same model-fitting domain; applying a threshold as the stopping criterion for Bogo’s iterative dogleg minimization is a straightforward, predictable design choice.
Before the effective filing date of the claimed invention, a POSITA implementing Seidel-style model fitting would have used Bogo’s iterative dogleg minimization “in a staged approach” (Bogo et al. (NPL), Sec. 3.3, p. 9) with a threshold-based convergence/acceptance criterion, consistent with Seidel’s use of an error that “lies above a threshold” (Seidel et al., ¶ [0034]), to obtain predictable improvements in reliable termination and computational efficiency.
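The threshold-terminated optimization cycle discussed above can be sketched as follows. This is a minimal sketch: the toy objective, the step rule, and the threshold value are illustrative assumptions, not taken from the references.

```python
def optimize_in_cycle(loss, step, params, threshold=1e-4, max_cycles=500):
    """Repeat optimization steps in a cycle, terminating when the loss
    value falls below the accuracy threshold."""
    for _ in range(max_cycles):
        if loss(params) < threshold:
            break  # accuracy threshold met; stop the cycle
        params = step(params)
    return params

# Toy objective: drive a single parameter toward the value 2.0.
result = optimize_in_cycle(
    loss=lambda p: (p - 2.0) ** 2,
    step=lambda p: p + 0.5 * (2.0 - p),  # stand-in for one optimizer step
    params=0.0,
)
```

The threshold check before each step is the convergence criterion a POSITA would supply; any gradient-informed update (dogleg, plain descent) can serve as the step rule.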
As per Claim 10, Seidel alone does not explicitly teach all the limitation(s) of the claim. However, when combined with Nelson, they collectively teach all of the limitation(s).
Seidel and Nelson teach the limitation(s) of Claim 10, which recites:
“The method of claim 1, further comprising identifying parameters of the three-dimensional avatar model that cannot be adjusted within predefined limits to obtain an acceptable loss function value, and guiding the user to either customize the identified three-dimensional avatar model parameters or to upload an additional image to improve the avatar model.”
Seidel et al. (US20130330060A1) teaches identifying the “incorrectly fitted” sub-parts (parameters) based on a thresholded fitting error, and providing user guidance on which parameters to hold fixed versus free (predefined limits) during editing:
“One may therefore perform global pose optimization only for those sub-chains of the kinematic model, which are incorrectly fitted. Errors in the local optimization result manifest through a limb-specific fitting error … that lies above a threshold.” — Seidel et al., ¶ [0034]
Seidel further teaches: “One may give the user control over this decision and give him the possibility to fix or let free certain attribute dimensions when performing an edit. To start with, for any attribute value the reshaping interface provides reasonable suggestions of what parameters to fix when modifying certain attributes individually.” — Seidel et al., ¶ [0040]
Seidel detects which model portions (sub-chains / limb-related parameters) are problematic using a fitting error that lies above a threshold (acceptable-loss concept) and then guides the user via an interface that provides reasonable suggestions of what parameters to fix and lets the user fix or let free dimensions (predefined adjustment limits), which corresponds to identifying the relevant parameters and guiding user customization.
Seidel et al. (US20130330060A1) further teaches user-guided intervention/customization during difficult fits:
“In difficult poses, the user may support the algorithm with manual constraint placement. Once the 3D model is tracked, the user may interactively modify its shape attributes.” — Seidel et al., ¶ [0008]
Nelson et al. (US20110292051A1) teaches guiding capture of additional images/frames to improve matching of the avatar model:
“The successive video frames can be submitted to the genetic process to refine the search for the best matching avatar model.” — Nelson et al., ¶ [0062]
Nelson further teaches: “To improve the matching … a head guide can be used during the image capture step … while successive images are captured …” — Nelson et al., ¶ [0065]
Seidel provides guidance for user-driven customization (for example, manual constraint placement / interactively modify attributes), and Nelson provides guidance for capturing successive images during the image capture step to improve the matching (uploading an additional image to improve the avatar model).
Before the effective filing date of the claimed invention, a POSITA would have been motivated to combine Seidel et al. (US20130330060A1) with Nelson et al. (US20110292051A1) to handle cases where Seidel’s fitting error indicates an unacceptable fit by identifying the problematic model portions using a limb-specific fitting error that lies above a threshold (Seidel et al., ¶ [0034]) and then guiding remediation either through user-directed parameter control (letting the user fix or let free certain attribute dimensions and providing reasonable suggestions of what parameters to fix) (Seidel et al., ¶ [0040]) or through acquisition of additional image evidence, since Nelson teaches using a guide during the image capture step while successive images are captured to improve the matching and refine the best matching avatar model (Nelson et al., ¶¶ [0062], [0065]); this combination yields predictable results by improving convergence and fit quality when parameter adjustments within normal bounds are insufficient.
System Claim 11 does not include any additional limitations that would significantly distinguish it from method claim 1. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.
System Claim 12 does not include any additional limitations that would significantly distinguish it from method claim 2. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.
System Claim 13 does not include any additional limitations that would significantly distinguish it from method claim 3. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.
System Claim 14 does not include any additional limitations that would significantly distinguish it from method claim 4. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.
System Claim 15 does not include any additional limitations that would significantly distinguish it from method claim 5. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.
System Claim 16 does not include any additional limitations that would significantly distinguish it from method claim 6. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.
System Claim 17 does not include any additional limitations that would significantly distinguish it from method claim 7. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.
System Claim 18 does not include any additional limitations that would significantly distinguish it from method claim 8. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.
System Claim 19 does not include any additional limitations that would significantly distinguish it from method claim 9. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.
System Claim 20 does not include any additional limitations that would significantly distinguish it from method claim 10. Therefore, it is likewise rejected under 35 U.S.C. § 103 in view of the same references and for the same reasons set forth above.
Conclusion
The prior art made of record and relied upon in this action is as follows:
Patent Literature:
Wilf (US20150154453A1) — “System and method for deriving accurate body size measures from a sequence of 2D images.”
Nelson et al. (US20110292051A1) — “Automatic Avatar Creation.”
Seidel et al. (US20130330060A1) — “Computer-implemented method and apparatus for tracking and reshaping a human shaped figure in a digital world video.”
Non-Patent Literature (NPL):
Bogo et al. (NPL) — “Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image”, 2016-07-27. Available at: https://arxiv.org/pdf/1607.08128
Note: A PDF copy of each NPL reference is attached with this Office Action. URLs are included for applicant convenience. If a link becomes unavailable in the future, the citation information may be used to locate the reference or access archived versions via the Wayback Machine.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed as follows:
Patent Literature:
(none)
Non-Patent Literature (NPL):
(none)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADEEL BASHIR whose telephone number is (571) 270-0440. The examiner can normally be reached Monday-Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached on (571) 276-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ADEEL BASHIR/
Examiner, Art Unit 2616
/DANIEL F HAJNIK/Supervisory Patent Examiner, Art Unit 2616