DETAILED ACTION
Application No. 19/210,298, filed on 05/16/2025, has been examined. Claims 1-20 are pending in this Office Action.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/25/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or
composition of matter, or any new and useful improvement thereof, may obtain a patent
therefor, subject to the conditions and requirements of this title.
Claims 1-2, 7-11 and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Based upon consideration of all of the relevant factors with respect to the claims as a whole, claims 1-2, 7-11 and 17-20 are determined to be directed to an abstract idea and not significantly more than the abstract idea itself. The rationale for this determination is explained below:
Claims 1, 11, 20:
At Step 1:
The claims are directed to “a method”, “a system”, and “one or more non-transitory computer readable media” and thus directed to a statutory category.
At Step 2A, Prong One:
The claim recites the following limitations directed to an abstract idea:
The limitation of “matching a query time to a time interval associated with the scene”, as drafted, is a process that, under broadest reasonable interpretation, covers a mental process.
The limitation of “generating, via execution of a machine learning model, (i) a first set of attributes associated with a set of canonical coordinates in the scene at a starting time of the time interval and (ii) a second set of attributes associated with the set of canonical coordinates at an ending time of the time interval” and “generating, via execution of a machine learning model based on the query time and a set of canonical coordinates of a three-dimensional (3D) Gaussian in the scene, (i) a first set of deformed coordinates at a starting time of the time interval and (ii) a second set of deformed coordinates at an ending time of the time interval”, as drafted, is a process that, under broadest reasonable interpretation, covers a mental process.
At Step 2A, Prong Two:
The claim recites the following additional elements:
-“a non-transitory computer readable media, one or more memories, one or more processors”, which are a high-level recitation of generic computer components, represents mere instructions to apply the judicial exception on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application, and/or generally links the use of the judicial exception to a particular technological environment or field of use by limiting it to a particular data source or type. See MPEP § 2106.05(h) and Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data). Therefore, the limitation does not recite any improvement to the technology.
-“computing a third set of attributes associated with the set of canonical coordinates at the query time based on a spline interpolation associated with the first set of attributes and the second set of attributes” and “computing a third set of deformed coordinates at the query time based on a spline interpolation associated with the first set of deformed coordinates and the second set of deformed coordinates” is insignificant extra-solution activity as mere data gathering, such as ‘obtaining information’. See MPEP 2106.05(g).
-“generating a representation of the scene at the query time based on the third set of attributes” is insignificant extra-solution activity as mere data gathering, such as ‘obtaining information’. See MPEP 2106.05(g).
Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.
At Step 2B:
The conclusions for the mere implementation using a computer are carried over and do not provide significantly more.
-“computing a third set of attributes associated with the set of canonical coordinates at the query time based on a spline interpolation associated with the first set of attributes and the second set of attributes” and “computing a third set of deformed coordinates at the query time based on a spline interpolation associated with the first set of deformed coordinates and the second set of deformed coordinates” is well-understood, routine, and conventional (WURC) activity as evidenced by the court cases cited in MPEP 2106.05(d)(II), including at least "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, ... buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)" and "iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, ... OIP Techs., 788 F.3d at 1363."
-“generating a representation of the scene at the query time based on the third set of attributes” is WURC activity as evidenced by the court cases cited in MPEP 2106.05(d)(II), including at least "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, ... buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)" and "iv. Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93".
Accordingly, at step 2B, these additional elements, both individually and in combination, do not amount to significantly more than the judicial exception. See MPEP § 2106.05. Therefore, the claim is not eligible subject matter under 35 U.S.C. 101.
The dependent claims 2, 7-10, and 17-19 have been fully considered as well. Similar to the findings for the claims above, these claims are likewise directed to the above-mentioned groupings of abstract ideas set forth in the 2019 PEG, without integrating them into a practical application and with, at most, a general-purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over LAXMINARAYANA BHAT et al. (US 2014/0254934 A1) in view of Li (US 2024/0073452 A1).
As per claim 1, LAXMINARAYANA BHAT teaches a computer-implemented method for determining a time-varying deformation associated with a scene, the method comprising:
matching a query time to a time interval associated with the scene ([0027], e.g., disclosing matching the time stamp of the query image with the time information of metadata in the database image);
LAXMINARAYANA BHAT does not explicitly teach generating, via execution of a machine learning model, (i) a first set of attributes associated with a set of canonical coordinates in the scene at a starting time of the time interval and (ii) a second set of attributes associated with the set of canonical coordinates at an ending time of the time interval and computing a third set of attributes associated with the set of canonical coordinates at the query time based on a spline interpolation associated with the first set of attributes and the second set of attributes and generating a representation of the scene at the query time based on the third set of attributes.
However, Li teaches generating, via execution of a machine learning model, (i) a first set of attributes associated with a set of canonical coordinates in the scene at a starting time of the time interval and (ii) a second set of attributes associated with the set of canonical coordinates at an ending time of the time interval and computing a third set of attributes associated with the set of canonical coordinates at the query time based on a spline interpolation associated with the first set of attributes and the second set of attributes and generating a representation of the scene at the query time based on the third set of attributes ([0038]-[0040], e.g., disclosing generating and employing a machine learning model to obtain color attribute information for a portion of a frame of 3D media content based on an input voxel coordinate, and a codec application configured to apply the quantization process to all frames within a particular scene and frames sequential in time).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Li with the teachings of LAXMINARAYANA BHAT in order to efficiently enable a system to access frames of 3D media content and generate a data structure for the frames based on color attribute information of the frames (Li).
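For clarity of the record regarding the interpretation of the claim 1 limitations, the recited steps of matching a query time to a time interval and computing attributes at the query time from the interval endpoints can be sketched as follows. This is an illustrative example only, not drawn from either cited reference; all function names are hypothetical, and a simple linear blend stands in for the recited spline interpolation.

```python
import bisect

def match_interval(query_time, interval_starts):
    """Match a query time to the index of the time interval containing it."""
    i = bisect.bisect_right(interval_starts, query_time) - 1
    return max(0, min(i, len(interval_starts) - 2))

def interpolate_attributes(query_time, t0, t1, attrs0, attrs1):
    """Blend per-coordinate attributes generated at the interval's starting
    and ending times to obtain a third set of attributes at the query time.
    (The claim recites a spline interpolation; a linear blend is shown here
    only as the simplest stand-in.)"""
    u = (query_time - t0) / (t1 - t0)  # relative time within the interval
    return [(1 - u) * a0 + u * a1 for a0, a1 in zip(attrs0, attrs1)]
```

In this reading, the first and second sets of attributes are produced once per interval endpoint, and only the lightweight blend runs per query time.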
As per claim 2, further comprising determining an additional representation of the scene at an additional query time that temporally follows the ending time based on a propagation of a position included in the second set of attributes using a velocity included in the second set of attributes ([0036]-[0040], Li).
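The claim 2 limitation of propagating a position past the ending time using a velocity can be illustrated as below. This is a hypothetical constant-velocity sketch for interpretive context only, not taken from Li; the function name and signature are assumptions.

```python
def propagate_position(position, velocity, t_end, query_time):
    """Extrapolate a position to a query time that temporally follows the
    ending time, by propagating it with the velocity from the second set
    of attributes (constant-velocity assumption)."""
    dt = query_time - t_end
    return [p + v * dt for p, v in zip(position, velocity)]
```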
As per claim 3, further comprising: determining, based on a set of edits to one or more key frames associated with the scene, (i) a first set of updated attributes associated with the set of canonical coordinates at the starting time and (ii) a second set of updated attributes associated with the set of canonical coordinates at the ending time; computing a third set of updated attributes associated with the set of canonical coordinates based on an additional spline interpolation associated with the first set of updated attributes and the second set of updated attributes; and generating an additional representation of the scene at the query time based on the third set of updated attributes ([0036]-[0040], Li).
As per claim 4, wherein generating the first set of attributes and the second set of attributes comprises: determining (i) a first set of temporal weights associated with the starting time and (ii) a second set of temporal weights associated with the ending time; generating (i) a first time-variant spatial encoding based on the first set of temporal weights and the set of canonical coordinates and (ii) a second time-variant spatial encoding based on the second set of temporal weights and the set of canonical coordinates; and generating (i) the first set of attributes based on the first time-variant spatial encoding and (ii) the second set of attributes based on the second time- variant spatial encoding ([0036]-[0046], Li).
As per claim 5, wherein generating the first set of attributes and the second set of attributes further comprises: aggregating features corresponding to the first time-variant spatial encoding or the second time-variant spatial encoding; and decoding, via execution of one or more layers included in the machine learning model, the aggregated features into the first set of attributes or the second set of attributes ([0036]-[0051], Li).
As per claim 6, wherein the first set of attributes and the second set of attributes are further generated based on at least one of a time- invariant base encoding or a set of residual encodings ([0036]-[0040], Li).
As per claim 7, wherein computing the third set of attributes comprises: determining, within the time interval, a relative time that corresponds to the query time; and performing the spline interpolation based on the relative time, the first set of attributes, and the second set of attributes ([0036]-[0040], Li).
As per claim 8, wherein the representation of the scene comprises a three-dimensional (3D) Gaussian that is parameterized based on the third set of attributes ([0036]-[0046], Li).
As per claim 9, wherein the first set of attributes and the second set of attributes comprise at least one of a position or a velocity ([0036]-[0056], Li).
As per claim 10, wherein the spline interpolation is associated with a cubic Hermite spline ([0028], Li).
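For context on the claim 10 limitation, a cubic Hermite spline evaluated at a normalized time within the interval takes the standard basis-function form shown below. This is a textbook illustration of the general technique, not a characterization of either reference's implementation; the names are hypothetical.

```python
def cubic_hermite(u, p0, p1, m0, m1):
    """Evaluate a cubic Hermite spline at normalized time u in [0, 1],
    given endpoint values p0, p1 and endpoint tangents m0, m1."""
    h00 = 2 * u**3 - 3 * u**2 + 1   # weight on starting value
    h10 = u**3 - 2 * u**2 + u       # weight on starting tangent
    h01 = -2 * u**3 + 3 * u**2      # weight on ending value
    h11 = u**3 - u**2               # weight on ending tangent
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

Where attributes include both position and velocity (claim 9), the velocities can naturally serve as the tangents m0 and m1.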
Regarding claims 11 and 20, these claims are rejected for substantially the same reasons as claim 1 above.
Regarding claims 12-19, these claims are rejected for substantially the same reasons as claims 2-10 above.
It is noted that any citations to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.
Citation of Pertinent Prior Art
The prior art made of record and not relied upon in form PTO-892, if any, is considered pertinent to applicant's disclosure.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mohammad A Sana whose telephone number is (571)270-1753. The examiner can normally be reached Monday-Friday 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sanjiv Shah, can be reached at 571-272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Mohammad A Sana/Primary Examiner, Art Unit 2166