DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This is in response to applicant's amendment/response filed on 12/19/2025, which has been entered and made of record. Claims 1, 11-12, and 14 have been amended. No claims have been cancelled or newly added. Claims 1-15 are pending in the application.
Response to Arguments
Applicant’s arguments, see Remarks Pages 8-13, filed 12/19/2025, with respect to the rejection(s) of claim(s) 1, 11 and 12 under 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
The arguments regarding dependent claims by virtue of their dependency are moot because the independent claims are not allowable.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: video input module, feature detector module, 3D morphable model module, optical flow module, and drift correction module in claim 12; and drift correction module in claim 15.
Claim 12, “video input module configured to receive successive input image frames from a sequence of input image frames comprising an object to track…” The corresponding structure in the disclosure for performing the claimed function of receiving successive input image frames is “data port 60 or A/V port 90 of entertainment device 10, or a prerecorded source such as optical drive 70 or data drive 50, in conjunction with CPU 20 and/or GPU 30” (paragraph 110). Therefore, the interpretation of the “module…configured to…” is a data port, A/V port, optical drive, or data drive, and equivalents thereof, when used in conjunction with a CPU and/or GPU.
Claim 12, “a feature detector module configured to detect a plurality of…”, “a 3D morphable model module configured to map…”, “an optical flow module configured to identify…”, “a drift correction module configured to correct…” The corresponding structure in the disclosure for performing each of these claimed module functions is “(e.g. CPU 20 and/or GPU 30) configured (for example by suitable software instruction)” (paragraphs 111-114). Therefore, the interpretation of each “module…configured to…” is a CPU and/or GPU programmed with the corresponding algorithm(s) to perform the respective functions, and equivalents thereof.
Claim 15, “drift correction module is configured to correct…” The corresponding structure in the disclosure for performing the claimed module function is “(e.g. CPU 20 and/or GPU 30) configured (for example by suitable software instruction)” (paragraph 114). Therefore, the interpretation of the “module…configured to…” is a CPU and/or GPU programmed with the corresponding algorithm(s) to perform the respective function, and equivalents thereof.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1 and 11 recite the limitations: "receiving successive input image frames" in lines 2 and 4 (respectively); “between successive input image frames” in lines 10-11 and 12-13 (respectively); and "the successive input image frames" in the last two lines of each claim. There is insufficient antecedent basis for these limitations in the claims. This is because it is unclear whether the successive input image frames recited above are the same instance or a new instance of successive input image frames; and when they are later referred to as “the” successive input image frames, it is further unclear which instance is being referred to if there are multiple instances.
Claims 2-10 are rejected under 35 U.S.C. 112(b) since they depend on a claim that is rejected under 35 U.S.C. 112(b).
Note: most likely these claims depend on some dependent claim or are missing elements. To fix this issue, the claim dependencies should be reviewed, and any first instance of an element should be made clearly a first instance by referring to it with “a” or “an” instead of “the”; if multiple instances exist, the further instances should be further distinguished, for example by reciting “first,” “second,” and/or “third,” etc.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 5, 7, 10, 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (U.S. Patent Application Publication No. 2021/0104086), hereinafter referenced as Wang, in view of Arumugam (U.S. Patent No. 10,540,817), hereinafter referenced as Arumugam, Zhe et al. (U.S. Patent Application Publication No. 2021/0241521), hereinafter referenced as Zhe, Sachs et al. (U.S. Patent Application Publication No. 2017/0069056), hereinafter referenced as Sachs and Beeler et al. (U.S. Patent Application Publication No. 2017/0091529), hereinafter referenced as Beeler.
Regarding claim 1, Wang teaches a method of point tracking, comprising the steps of: receiving successive input image frames from a sequence of input image frames comprising an object to track (paragraph 114 teaches an embodiment as a method and paragraph 52 teaches sequential input frames being received with each including an object); for each input image frame, executing an optical flow module to identify a plurality of flow points (paragraph 57 teaches "As shown, optical flow module 714 may evaluate sub-regions including a representation of a human face to generate such optical flow data between input images 702, 703 as is known in the art. The resultant optical flow data are used to establish pixel matches between rendered output image 709 and rendered image 710"); performing optical flow tracking of the plurality of flow points between successive input image frames (Paragraph 57 teaches how optical flow data between input images is generated). However, Wang fails to teach for each input image frame, executing a feature detector to detect a plurality of feature points; and mapping a 3D morphable model (3DMM) along a plurality of 3DMM points that are projected to correspond to the plurality of feature points; and correcting, based at least in part on a comparison between the plurality of feature points and the plurality of 3DMM points for each input image frame, optical flow tracking for at least a first flow point position responsive to the mapped 3D morphable model.
However, Arumugam teaches for each input frame, executing a feature detector to detect a plurality of feature points (Arumugam, col. 3, lines 27-31 teach identifying feature points of the input image) and mapping a 3D morphable model (3DMM) along a plurality of 3DMM points that are projected to correspond to the plurality of feature points (Arumugam, col. 3, lines 34-37 teach mapping feature points of scaled principal regions with feature points of a 3DMM). Arumugam is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of mapping a 3D model to feature points. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s invention to incorporate the teachings of Arumugam so that a realistic appearance of the input 2D image is provided in a 3D view space (Arumugam, col. 7, lines 26-28). A more realistic appearance would lead to a more accurate alignment of points.
However, the combination of Wang and Arumugam fails to explicitly teach correcting, based at least in part on a comparison between the plurality of feature points and the plurality of 3DMM points for each input image frame, optical flow tracking for at least a first flow point position responsive to the mapped 3D morphable model.
However, Zhe teaches correcting, based at least in part on a comparison between the plurality of feature points and the plurality of 3DMM points for each input image frame, optical flow tracking for at least a first flow point position responsive to the mapped 3D morphable model (Zhe, paragraph 56 teaches an optical flow map determined through 3D morphable models and paragraphs 89-90 teach implementing optical flow completion and correction). Zhe is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of face image generation using optical flow and correction thereof. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang and Arumugam with the optical flow correction techniques of Zhe to generate a realistic and natural face image (Zhe, paragraph 89). This would ensure a better user experience by preserving key identity features.
However, the combination of Wang, Arumugam and Zhe fails to explicitly teach based at least in part on a comparison between the plurality of feature points and the plurality of 3DMM points for each input image frame.
However, Sachs teaches based at least in part on a comparison between the plurality of feature points and the plurality of 3DMM points for each input image frame (Sachs, paragraph 41 teaches “The identified facial key points 302 are compared against a 3D facial mesh 504”…“The 3D facial mesh 504 contains facial landmarks 506”… “The 3D facial mesh 504 may be rotated, translated, and/or deformed in order to better fit the locations of the facial landmarks 506 to the locations of the identified facial key points 302.”); this shows the facial key points/feature points being compared against the 3D facial mesh landmarks/3DMM points. Sachs is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of warping an image to generate a new image by using comparison of meshes and key/feature points. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang, Arumugam and Zhe with the comparison of feature points to 3DMM/3d mesh points techniques of Sachs to ensure correcting distortion in an input image to improve its perceptual quality (Sachs, Abstract). This would be done due to the better fitting from the comparison of the two sets of points.
However, the combination of Wang, Arumugam, Zhe and Sachs fails to explicitly teach wherein correcting includes enforcing anatomical constraints determined by the 3D morphable model to maintain anatomically valid spatial relationships among the plurality of flow points across the successive input image frames (although Wang, paragraph 50 teaches "a constraint function to measure the variation of morphable model parameter vector 605 from a mean or median and/or to measure violation of constraints of expected parameter values of morphable model parameter vector 605 expressed as a scalar value" and Sachs, paragraph 42 teaches "energy minimization function 508 has the facial landmarks 506 with locations having a close correspondence to the locations of the identified facial key points 302 as well as a close correspondence to a constraining subspace. The constraining subspace is created through previous sampling of facial features in a plurality of images, and ensures that deformed 3D vertex locations maintain a similarity to an actual human face"; both of which show and/or imply constraints used by the 3D model to keep a valid spatial relationship for facial/anatomical features).
However, Beeler explicitly teaches wherein correcting includes enforcing anatomical constraints determined by the 3D morphable model to maintain anatomically valid spatial relationships among the plurality of flow points across the successive input image frames (Beeler, paragraph 87 teaches “flow from one camera image to another camera image may be computed. Therefore, optical flow may be used to find correspondences of facial features between the different captured images. The term E.sub.O is referred to as the overlap constraint, which is a spatial regularization term to enforce neighboring patches to agree with each other wherever they have shared vertices. The term E.sub.A is the anatomical constraint, which ensures that patches remain plausibly connected with the bone structure. The term E.sub.T is a temporal regularization term, which ensures smooth change over time”); since this uses optical flow, this shows that the correcting of optical flow from the above combination would include anatomical constraints (determined by the 3D morphable model, since the constraints apply to patches of such), and this keeps valid spatial relationships among flow points since it applies spatial regularization as well as keeping patches connected to the bone structure. Beeler is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of anatomical constraints to maintain spatial relationships during optical flow tracking. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang, Arumugam, Zhe and Sachs with the anatomical constraint techniques of Beeler to significantly improve the depth reconstruction of the face (Beeler, paragraph 92). This would be due to the anatomical constraints yielding better spatial attributes and ultimately leading to an improved user experience.
Regarding claim 5, the combination of Wang, Arumugam, Zhe, Sachs and Beeler teaches wherein the object comprises at least one of: i. a face; and ii. a body (Wang, paragraph 52 and fig. 5 teaches the object being a human face).
Regarding claim 7, the combination of Wang, Arumugam, Zhe, Sachs and Beeler teaches calibrating the 3D morphable model to anatomic proportions of a person depicted within the input image frames (Arumugam, col. 9, lines 39-61 teaches by combining different portions of different meshes, the 3D morphable model is calibrated to a person’s unique features). The same motivations used in claim 1 apply here in claim 7.
Regarding claim 10, the combination of Wang, Arumugam, Zhe, Sachs and Beeler teaches further comprising: for a selected input image frame, outputting at least one of: i. corrected optical flow tracking data; ii. expression parameters corresponding to the 3D morphable model; and iii. feature point data (Arumugam, col. 12, lines 23-29 teach that the software components that create 3D objects may reside in a cloud-based computing architecture, and that devices communicate by sending the 2D frontal face image and receiving the 3D models from the cloud). If the 2D image is sent as input, and the cloud sends the 3D model back to the device, this is effectively "outputting" the 3D model, which is inclusive of mapped feature points and expression parameters as shown in fig. 3. The same motivations used in claim 1 apply here in claim 10.
Regarding claim 11, the non-transitory computer-readable medium claim is similar to method claim 1 and thus is rejected under similar rationale (Wang, paragraph 114 teaches an embodiment as a non-transitory computer-readable medium).
Regarding claim 12, the combination of Wang, Arumugam, Zhe, Sachs and Beeler teaches a point tracking system, comprising: a video input module (Wang, paragraph 24 and fig. 10 teaches a system which may be implemented with a camera 1004). Wang also teaches a feature detector module, 3D morphable model module, optical flow module, and drift correction module (Wang, paragraph 93 and fig. 10 exhibit a CPU). All of these modules are configured to perform all of the respectively claimed functions after the CPU is programmed with instructions (Wang, paragraphs 88 and 89 teach programming the modules with code and/or instructions). Moreover, system claim 12 is similar to method claim 1, and thus is rejected under similar rationale.
Claim(s) 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wang, Arumugam, Zhe, Sachs and Beeler as applied to claims 1 and 12 above, and further in view of Che et al. (U.S. Patent Application Publication No. 2020/0097729), hereinafter referenced as Che.
Regarding claim 2, the combination of Wang, Arumugam, Zhe, Sachs and Beeler fails to teach that the plurality of flow points comprise at least a portion of the plurality of feature points so detected. However, Che teaches that the plurality of flow points comprise at least a portion of the plurality of feature points so detected (Che, paragraph 78 teaches optical flow points being a subset of feature points). Flow points being a subset of feature points means that at least some flow points are the same as feature points. Che is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of using optical flow points to track motion across video frames. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang, Arumugam, Zhe, Sachs and Beeler to incorporate the teachings of Che so that the position of the moving object can be calculated without knowing the information of any scene (Che, paragraph 77).
Regarding claim 13, the system claim is similar to method claim 2, and thus is rejected under similar rationale.
Claim(s) 3, 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wang, Arumugam, Zhe, Sachs and Beeler as applied to claims 1 and 12 above, and further in view of Pan et al. (U.S. Patent Application Publication No. 2023/0154236), hereinafter referenced as Pan.
Regarding claim 3, the combination of Wang, Arumugam, Zhe, Sachs and Beeler fails to teach wherein, the plurality of feature points are detected using at least one machine learning model.
However, Pan teaches wherein the plurality of feature points are detected using at least one machine learning model (Pan, paragraph 51 teaches using a plurality of machine learning models for respective visual features of a face). Pan is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of detecting facial features and feature points. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang, Arumugam, Zhe, Sachs and Beeler to incorporate the teachings of Pan, since machine learning is commonly known in the art to analyze large amounts of data accurately, which would result in a more accurate feature point determination.
Regarding claim 4, the combination of Wang, Arumugam, Zhe, Sachs, Beeler and Pan discloses wherein the machine learning model is trained to detect a visual feature of the object, and a group of the feature points, out of the plurality of feature points corresponding to the visual feature is detected by the machine learning model (Pan, paragraph 51 teaches using a plurality of machine learning models for respective visual features of a face). The same motivations used in claim 3 apply here in claim 4.
Regarding claim 14, the combination of Wang, Arumugam, Zhe, Sachs, Beeler and Pan discloses wherein the feature detector module is configured to detect the plurality of feature points using at least one machine learning module trained to detect a visual feature of the object, and a group of feature points out of the plurality of feature points corresponding to the visual feature is detected by the machine learning model (Pan, paragraph 51 teaches using a plurality of machine learning models for respective visual features of a face). The same motivations used in claim 3 apply here in claim 14.
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wang, Arumugam, Zhe, Sachs and Beeler as applied to claim 5 above, and further in view of Seo et al. (Compression and Direct Manipulation of Complex Blendshape Models), hereinafter referenced as Seo.
Regarding claim 6, the combination of Wang, Arumugam, Zhe, Sachs and Beeler fails to disclose wherein the 3D morphable model is a linear blendshape-based model.
However, Seo teaches that a 3D morphable model is a linear blendshape-based model (Seo, see fig. 1 and Section 1, Introduction), and blendshapes themselves are 3D models, with the morphing meaning that the blendshapes here are morphable models (Seo, fig. 7 and Section 4.1 exhibit blendshapes as morphable models). Seo is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of manipulation of blendshape models. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang, Arumugam, Zhe, Sachs and Beeler to incorporate the teachings of Seo to use a linear blendshape model, because linear blendshape models have a simple and interpretable parameterization which allows the models to be shaped exactly as desired (Seo, see Section 1, Introduction).
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wang, Arumugam, Zhe, Sachs and Beeler as applied to claim 1 above, and further in view of Molyneaux (U.S. Patent Application Publication No. 2020/0410753), hereinafter referenced as Molyneaux, and Li et al. (U.S. Patent No. 9,317,954), hereinafter referenced as Li.
Regarding claim 8, the combination of Wang, Arumugam, Zhe, Sachs and Beeler fails to teach the successive input image frames comprise stereoscopic image pairs; and the method further comprising: generating a depth map from each one of the stereoscopic image pair; mapping the plurality of feature points so detected to corresponding depth positions; and mapping the 3D morphable model to the plurality of feature points at the mapped depth positions. However, Molyneaux teaches the successive input image frames comprise stereoscopic image pairs (Molyneaux, paragraph 105 teaches stereo images); and the method further comprising: generating a depth map from each one of the stereoscopic image pair (Molyneaux, paragraph 112 teaches generating stereo depth information from a stereo camera). Molyneaux is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang, Arumugam, Zhe, Sachs and Beeler to incorporate the teachings of Molyneaux so that accurate interaction of the physical world to the display is provided (Molyneaux, paragraph 61) and so that a depth map can be generated as fast as a depth sensor can form new images (Molyneaux, paragraph 62). However, the combination of Wang, Arumugam, Zhe, Sachs, Beeler and Molyneaux fails to teach mapping the plurality of feature points so detected to corresponding depth positions; and mapping the 3D morphable model to the plurality of feature points at the mapped depth positions.
However, Li teaches mapping the plurality of feature points so detected to corresponding depth positions (Li, col. 7, lines 33-38 teach finding the correspondence between vertices on each mesh of a morphable face model and the depth map); and mapping the 3D morphable model to the plurality of feature points at the mapped depth positions (Li, col. 7, lines 64-67 and col. 8, lines 1-3 both teach fitting of the 3D morphable model by using point-to-plane constraints on input [depth] data as well as point-to-point constraints on features). Li is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of techniques of facial capture, tracking body deformations, and fitting input data using blendshapes. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang, Arumugam, Zhe, Sachs, Beeler and Molyneaux to incorporate the teachings of Li so that the fitting can include optimization that solves for a global rigid transformation, leading to a neutral model being tailored to the subject (Li, col. 7, lines 51-56).
Claim(s) 9 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wang, Arumugam, Zhe, Sachs and Beeler as applied to claims 1 and 12 above, and further in view of Deng et al. (U.S. Patent Application Publication No. 2022/0237862), hereinafter referenced as Deng.
Regarding claim 9, the combination of Wang, Arumugam, Zhe, Sachs and Beeler fails to teach correcting optical flow tracking comprises at least one of:
i. altering a position of a selected flow point determined to be located at a distance more than a predetermined distance from a corresponding point of the 3D morphable model to reduce that distance;
ii. altering the position of the selected flow point, if a corresponding point of the 3D morphable model corresponds to a first predetermined feature, and the position of the flow point would cause the first predetermined feature to at least one of cross and intersect with a second predetermined feature; and
iii. altering the position of the selected flow point, if the position of the flow point would cause the first predetermined feature to have at least one of a positional relationship and an orientational relationship with the second predetermined feature that would be inconsistent with a predetermined relationship defined between the first predetermined feature and the second predetermined feature.
However, Deng teaches that correcting optical flow tracking comprises at least one of:
i. altering a position of a selected flow point determined to be located at a distance more than a predetermined distance from a corresponding point of the 3D morphable model to reduce that distance (Deng, paragraph 132 teaches lowering weight values to change visibility and/or distance; for example, setting a weight to 0 alters a flow point position to be 0 when the corresponding point is one that would be occluded due to the subject’s head position);
ii. altering the position of the selected flow point, if a corresponding point of the 3D morphable model corresponds to a first predetermined feature, and the position of the flow point would cause the first predetermined feature to at least one of cross and intersect with a second predetermined feature (Deng, paragraph 132, teaches if a landmark on the outside corner of the right eye [which is a predetermined feature] is not visible while turning the head, then the weight value would be adjusted accordingly for it); the weight value is adjusted to 0 because if it wasn’t then the flow point would cross/intersect with another feature which is visible with the head turned; and
iii. altering the position of the selected flow point, if the position of the flow point would cause the first predetermined feature to have at least one of a positional relationship and an orientational relationship with the second predetermined feature that would be inconsistent with a predetermined relationship defined between the first predetermined feature and the second predetermined feature.
Deng is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of video-based activity recognition using 3D models. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang, Arumugam, Zhe, Sachs and Beeler to incorporate the teachings of Deng to ensure correct distancing, which would lead to a correct output (Deng, paragraph 132). This is because lowering the weight of a feature reduces the importance of that feature: when calculating distance, less emphasis is given to that particular feature, leading to a changed distance calculation that reflects a more accurate distance.
Regarding claim 15, the system claim is similar to method claim 9, and thus is rejected under similar rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAUMAN U AHMAD whose telephone number is (703)756-5306. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.U.A./Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611