DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114 was filed in this application after a decision by the Patent Trial and Appeal Board, but before the filing of a Notice of Appeal to the Court of Appeals for the Federal Circuit or the commencement of a civil action. Since this application is eligible for continued examination under 37 CFR 1.114 and the fee set forth in 37 CFR 1.17(e) has been timely paid, the appeal has been withdrawn pursuant to 37 CFR 1.114 and prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant’s submission filed on 12/23/2025 has been entered.
Response to Arguments
Applicant's arguments filed on 12/23/2025 have been fully considered but they are not persuasive.
Applicant argues: “As a preliminary matter, the Decision on Appeal clarified that Applicant's specification is not itself prior art, but describes an "MPEG video standard." The MPEG video standard to which the Decision on Appeal refers is ISO/IEC DIS 23090-14:2021 (i.e., ISO/IEC DIS 23090-14), which was cited in an IDS dated May 18, 2022. As such, Applicant respectfully requests that this document be cited when relied upon as a reference in support of rejections, for clarity.”
Examiner notes that the PTAB Decision mailed 10/27/2025 (“PTAB Decision”) does not in fact rely on ISO/IEC DIS 23090-14:2021 (i.e., ISO/IEC DIS 23090-14) as a basis for its decision. The PTAB Decision affirmed the rejections based on Applicant Admitted Prior Art (AAPA), Khan, and Graziosi; therefore, an additional reference need not be cited in support of these rejections at this time.
Cumulatively, note that a statement by an applicant in the specification or made during prosecution identifying prior art is an admission which can be relied upon for both anticipation and obviousness determinations, regardless of whether the admitted prior art would otherwise qualify as prior art under the statutory categories of 35 U.S.C. 102. Riverwood Int'l Corp. v. R.A. Jones & Co., 324 F.3d 1346, 1354, 66 USPQ2d 1331, 1337 (Fed. Cir. 2003); Constant v. Advanced Micro-Devices Inc., 848 F.2d 1560, 1570, 7 USPQ2d 1057, 1063 (Fed. Cir. 1988).
Further arguments are directed to the patentability of the newly amended claims. See the updated reasons for rejection of the newly amended claim language below. Note that claim 1 appears to be a broader version of previously rejected claim 5, the rejection of which was affirmed by the PTAB Decision. See the corresponding reasons for rejection below.
Of note, Applicant argues: “Instead, Kahn describes generating a "sphere-tree" when an object is loaded. That is, Kahn loads an object and uses that loaded object to generate the sphere-tree. The sphere-tree of Kahn is therefore not the same as camera control data that is extracted from an MPEG scene description, as in amended claim 1. Instead, in Kahn, object data is used to generate the sphere-tree.”
Examiner notes that the claim neither limits the step of extracting nor otherwise precludes the extracted data from taking the form of a sphere-tree or of the other zones described in Khan. In fact, claim 8 specifically recites a sphere as an example bounding volume.
Applicant argues: “To the extent that Graziosi may describe "motion tracking data," Graziosi explains that such motion tracking data represents motion of the 3D objects, but not "camera control data," as in amended claim 1. …”
Examiner notes that Applicant previously presented a narrower version of this claim language, and similar arguments, on appeal, and that the rejection was affirmed by the PTAB Decision. Examiner suggests that the claims elaborate on the methods of encoding this data and using it in presentation.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This paragraph describes the treatment of admitted prior art. In describing an invention, Applicant must inevitably reference that which is known in the art as the basis for the invention; however, it is important that the claims particularly point out and distinctly claim that which Applicant regards as his own invention. See 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. A statement by an applicant in the specification or made during prosecution identifying prior art is an admission which can be relied upon for both anticipation and obviousness determinations, regardless of whether the admitted prior art would otherwise qualify as prior art under the statutory categories of 35 U.S.C. 102. Riverwood Int'l Corp. v. R.A. Jones & Co., 324 F.3d 1346, 1354, 66 USPQ2d 1331, 1337 (Fed. Cir. 2003); Constant v. Advanced Micro-Devices Inc., 848 F.2d 1560, 1570, 7 USPQ2d 1057, 1063 (Fed. Cir. 1988). The examiner must determine whether the subject matter identified as prior art is applicant's own work or the work of another. In the absence of another credible explanation, examiners should treat such subject matter as the work of another. MPEP 2129.
Claims 1-4, 6-17, 19-30, and 32-40 are rejected under 35 U.S.C. 103 as being unpatentable over Applicant Admitted Prior Art in the Specification (“AAPA”) in view of US 20060227134 to Khan (“Khan”), also cited in an IDS, and further in view of US 20190236809 to Graziosi (“Graziosi”). The reasons for rejection are consistent with the PTAB Decision of 10/27/2025 affirming these rejections (“PTAB Decision”).
Regarding Claim 1: “A method of retrieving media data, the method comprising:
receiving, by a presentation engine, streamed media data including an MPEG scene description (“A recent MPEG Scene Description element includes support for timed media in glTF 2.0. A media access function (MAF) offers an application programming interface (API) to a presentation engine, through which the presentation engine may request timed media. A retrieval unit executing the MAF may process the retrieved timed media data and pass the processed media data to the presentation engine in a desired format through circular buffers.” AAPA, Specification, Paragraph 6. See similarly in Graziosi, Paragraphs 18, 25.)
representing a virtual three-dimensional scene including object description data for each object of a set of virtual solid objects including at least one virtual solid object; (See using MPEG Scene Description data to represent walls or other objects in AAPA, Specification, Paragraph 6. See similarly in Graziosi, Paragraphs 18, 25, and Khan, Paragraphs 5, 43, and example solid virtual objects in a 3D scene in Figs. 7, 10.)
receiving, by the presentation engine, camera movement data from a user requesting that the virtual camera move through the at least one virtual solid object; and (“Thus, users are typically able to move freely in a 3D scene (e.g., through walls displayed in the 3D scene).” AAPA, Specification, Paragraph 6. Similarly, “Freeform camera motion allows the user to navigate to any point in space,” including movement requests through virtual solid objects. Khan, Paragraph 39. See similarly in AAPA, Specification, Paragraph 6.)
AAPA does not teach the claim features below:
Khan teaches these features in the context of a user interface for displaying and interacting with 3D objects:
(AAPA teaches MPEG scene description but does not teach “extracting of the camera control data.” Khan teaches that the surface following mode extracts/generates camera control data based on “a surface of an object” in the scene. See, Khan, Paragraph 39. “When in the surface following mode … an indexing structure, conventionally called a sphere-tree, is generated when the user loads an object” thus the limitations are received based on the object video data. Khan, Paragraph 43 and ways of loading data in Paragraph 71.)
the camera control data being separate and distinct from the object description data and defining permissible movements for a virtual camera, (“The behavior of the invention could be considered to be like a camera that hovers above a surface … For specific surface-based tasks like 3D painting or sculpting, the present invention provides a subset of this freedom with the benefit of following the surface, …” a set of data separate and distinct from object description data that limits the permissible locations for a virtual camera. Khan, Paragraphs 38-40, 43.)
excluding movements through any object of the set of virtual solid objects from the permissible movements; (“a subset of this freedom with the benefit of following the surface. … The surface following camera orbit distance will always be between the inner limit 148 (FIG. 2) and the outer limit 146” thus following the surface of an object within an orbit distance around the object that prevents movements through the object. See Khan, Paragraphs 38-40, and Fig. 2.)
using the camera control data, updating, by the presentation engine, a location of the virtual camera to ensure the virtual camera only moves according to the permissible movements and does not move through the at least one virtual solid object.” (“For specific surface-based tasks like 3D painting or sculpting, the present invention provides a subset of this freedom with the benefit of following the surface,” a set of data that limits the permissible locations for a virtual camera on the outside of the 3D object. “The surface following camera orbit distance will always be between the inner limit 148 (FIG. 2) and the outer limit 146” thus following the surface of an object within an orbit distance around the object that prevents movements through the object. See Khan, Paragraphs 38-40, and Fig. 2.)
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of AAPA to perform the above claimed functions of extracting camera control data from received scene media, including data defining permissible locations for a virtual camera, and ensuring that the virtual camera remains within the permissible locations around the object (and does not move through the object) using that data, as taught in Khan, for the “benefit of following the surface” of the object with the camera. Khan, Paragraphs 39-40.
Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.
Cumulatively, Khan does not teach: “streamed media data [including an MPEG scene description representing a virtual three-dimensional scene including object description data]”
AAPA discloses industry standards for streaming video, such as MPEG, in Specification Paragraph 6, and Khan indicates that its processes and data structures can be distributed and downloaded over the internet in Paragraph 71. Thus, AAPA and Khan indicate that this operation is intended for use with streaming data, but they do not state so explicitly.
Graziosi confirms that this use was known in the art of video encoding and decoding in the context of video streaming and video-conferencing systems: “The plurality of 3D geometric meshes may be encoded using 3D object encoding techniques, such as Moving Picture Expert Group-4 (MPEG-4) an animation framework extensions (AFX) encoding method, and the like, known in the art.” Graziosi, Paragraph 25. “The encoder 206 may output the one or more bitstreams corresponding to each of the plurality of 3D geometric meshes … bitstreams may include position information corresponding to each of the plurality of objects 304 in the 3D space 302. … for free-view or multi-view applications,” thus including data for limited-view applications. See Graziosi, Paragraphs 46-47. Cumulatively, this position information is applied to “the generated 3D geometric mesh and the motion tracking data” of the kind used in Khan to designate camera positions around each object. Graziosi, Paragraphs 23, 25.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of AAPA and Khan to receive “streamed media data including an MPEG scene description representing a virtual three-dimensional scene including object description data” embodied in one or a set of 3D meshes, as taught in Graziosi, in order to transmit video data encoded under industry standards such as MPEG. Graziosi, Paragraphs 25, 46.
Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.
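For illustration of the combined teachings as mapped above, the following Python sketch shows one way that camera control data, carried separately from object description data in a scene-description-like structure, could be used to gate camera movement. The data layout, identifiers, and values are hypothetical and are not drawn from AAPA, Khan, Graziosi, or any MPEG specification.

```python
# Hypothetical sketch: gate camera movement using camera control data that is
# carried separately from the object description data in a scene description.
# The dictionary layout and field names are illustrative only.

scene_description = {
    "objects": [  # object description data (e.g., a solid wall)
        {"name": "wall", "aabb_min": (0.0, 0.0, 0.0), "aabb_max": (1.0, 5.0, 5.0)},
    ],
    "camera_control": {  # separate and distinct camera control data
        "forbid_entering_objects": True,
    },
}

def extract_camera_control(scene):
    """Pull the camera control data out of the scene description."""
    return scene["camera_control"]

def inside_aabb(point, aabb_min, aabb_max):
    """Axis-aligned bounding-box containment test."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, aabb_min, aabb_max))

def update_camera(scene, current_pos, requested_pos):
    """Apply the requested move only if it is a permissible movement."""
    control = extract_camera_control(scene)
    if control.get("forbid_entering_objects"):
        for obj in scene["objects"]:
            if inside_aabb(requested_pos, obj["aabb_min"], obj["aabb_max"]):
                return current_pos  # reject movement through a solid object
    return requested_pos

# A request to move into the wall is rejected; a free-space move is allowed.
print(update_camera(scene_description, (2.0, 1.0, 1.0), (0.5, 1.0, 1.0)))  # (2.0, 1.0, 1.0)
print(update_camera(scene_description, (2.0, 1.0, 1.0), (3.0, 1.0, 1.0)))  # (3.0, 1.0, 1.0)
```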
Regarding Claim 2: “The method of claim 1, wherein updating the location of the virtual camera comprises preventing the virtual camera from passing through the at least one virtual solid object.” (“The surface following camera orbit distance will always be between the inner limit 148 (FIG. 2) and the outer limit 146” which prevents the virtual camera from being too close to the object or going through it. Khan, Paragraph 40. See statement of motivation in Claim 1.)
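By way of illustration only, the following Python sketch shows the general concept of keeping a camera's orbit distance between an inner and an outer limit, in the spirit of the passage of Khan quoted above; it is a schematic illustration rather than Khan's implementation, and all names and values are hypothetical.

```python
import math

# Hypothetical sketch of keeping a camera's distance from an object between an
# inner and an outer limit, so the camera can neither penetrate the object nor
# drift too far away. The numbers and names are illustrative only.

INNER_LIMIT = 1.0   # minimum allowed distance from the object's center
OUTER_LIMIT = 3.0   # maximum allowed distance from the object's center

def clamp_orbit(camera_pos, object_center):
    """Project the camera back into the [INNER_LIMIT, OUTER_LIMIT] shell."""
    offset = [c - o for c, o in zip(camera_pos, object_center)]
    dist = math.sqrt(sum(d * d for d in offset))
    if dist == 0.0:
        return (object_center[0] + INNER_LIMIT, object_center[1], object_center[2])
    clamped = min(max(dist, INNER_LIMIT), OUTER_LIMIT)
    scale = clamped / dist
    return tuple(o + d * scale for o, d in zip(object_center, offset))

# A requested position inside the object (distance 0.5) is pushed out to the inner limit.
print(clamp_orbit((0.5, 0.0, 0.0), (0.0, 0.0, 0.0)))  # (1.0, 0.0, 0.0)
```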
Regarding Claim 3: “The method of claim 1, wherein the streamed media data comprises glTF 2.0 media data.” (This appears to be a common format for timed media. AAPA, Specification, Paragraph 6.)
Regarding Claim 4: “The method of claim 1, wherein receiving the streamed media data comprises requesting the streamed media data from a retrieval unit via an application programming interface (API).” (This appears to be a conventional way to request data retrieval: “A media access function (MAF) offers an application programming interface (API) to a presentation engine, through which the presentation engine may request timed media.” AAPA, Specification, Paragraph 6.)
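For illustration of the request pattern described in AAPA, Specification, Paragraph 6 (a presentation engine requesting timed media from a retrieval unit through an API, with data passed through circular buffers), the following Python sketch is provided; the class and method names are hypothetical and do not reproduce the actual MAF API.

```python
from collections import deque

# Hypothetical sketch: a presentation engine requests timed media from a
# retrieval unit through an API, and processed media is handed back through a
# circular buffer. Class and method names are illustrative only.

class RetrievalUnit:
    def __init__(self, capacity=4):
        self.buffer = deque(maxlen=capacity)  # simple stand-in for a circular buffer

    def request_timed_media(self, uri):
        """API entry point used by the presentation engine to request media."""
        frame = f"decoded frame from {uri}"  # placeholder for fetch and decode
        self.buffer.append(frame)

    def read(self):
        """Drain processed media in the desired format."""
        return self.buffer.popleft() if self.buffer else None

class PresentationEngine:
    def __init__(self, retrieval_unit):
        self.retrieval_unit = retrieval_unit

    def present(self, uri):
        self.retrieval_unit.request_timed_media(uri)
        return self.retrieval_unit.read()

engine = PresentationEngine(RetrievalUnit())
print(engine.present("example://timed-media/track1"))
```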
Regarding Claim 6: “The method of claim 1,
wherein the camera control data includes data defining two or more anchor points and one or more segments between the anchor points, the segments representing permissible camera movement vectors for the virtual camera, and (“The surface-following process is applied 194 to the motion resulting in target eye and look at points [initial two or more anchor points]. Then motion clipping is applied 196 to produce new eye target and look at points [next two or more anchor points]. Then, the eye point and look at point are moved 198 to these new points.” Khan, Paragraph 42 and statement of motivation in Claim 1.)
wherein updating the location of the virtual camera comprises allowing the virtual camera to only traverse the segments between the anchor points.” (“The surface-following process is applied 194 to the motion resulting in target eye and look at points. Then motion clipping is applied 196 to produce new eye target and look at points. Then, the eye point and look at point are moved 198 to these new points,” thus traversing the segments between these anchor points. Khan, Paragraph 42 and statement of motivation in Claim 1.)
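By way of illustration only, the following Python sketch shows one way camera movement could be restricted to segments between anchor points, as recited in Claim 6; the data values and function names are hypothetical and are not drawn from Khan.

```python
# Hypothetical sketch: camera control data defines anchor points and segments
# between them, and the camera location is updated only along those segments.
# Names and data values are illustrative only.

anchors = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (4.0, 3.0, 0.0)]  # anchor points
segments = [(0, 1), (1, 2)]  # permissible camera movement vectors between anchors

def closest_point_on_segment(p, a, b):
    """Return the point on segment ab nearest to p."""
    ab = [bb - aa for aa, bb in zip(a, b)]
    ap = [pp - aa for aa, pp in zip(a, p)]
    denom = sum(d * d for d in ab)
    t = 0.0 if denom == 0.0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    return tuple(aa + t * d for aa, d in zip(a, ab))

def update_camera(requested_pos):
    """Snap a requested position onto the nearest permissible segment."""
    candidates = [closest_point_on_segment(requested_pos, anchors[i], anchors[j])
                  for i, j in segments]
    return min(candidates,
               key=lambda c: sum((ci - ri) ** 2 for ci, ri in zip(c, requested_pos)))

# A request that strays off the path is pulled back onto the nearest segment.
print(update_camera((2.0, 1.0, 0.0)))  # (2.0, 0.0, 0.0)
```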
Regarding Claim 7: “The method of claim 1,
wherein the camera control data includes data defining a bounding volume representing a permissible camera movement volume for the virtual camera, and (See the permissible / bounding volume defined as the space between the inner and outer limits of the camera orbit in Khan, Paragraph 40, and examples in Figs. 2-3, 15, 19, 22. See statement of motivation in Claim 1.)
wherein updating the location of the virtual camera comprises allowing the virtual camera to only traverse the permissible camera movement volume.” (See the permissible / bounding volume defined as the space between the inner and outer limits of the camera orbit in Khan, Paragraph 40, and examples in Figs. 2-3, 15, 19, 22. See statement of motivation in Claim 1.)
Regarding Claim 8: “The method of claim 7, wherein the data defining the bounding volume comprises data defining at least one of a cone, a frustrum, or a sphere.” (See examples of spherical volumes in Khan, Figs. 7-9 and paragraphs 40 and 43, frustum in Figs. 2-3 and 10, and cone in Figs. 19, 22. See statement of motivation in Claim 1.)
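By way of illustration only, the following Python sketch shows one way a bounding volume (here a sphere, as recited in Claim 8) could define the permissible camera movement volume and constrain the camera to it; the values and names are hypothetical and are not drawn from Khan.

```python
import math

# Hypothetical sketch: camera control data defines a bounding volume (here a
# sphere) representing the permissible camera movement volume, and the camera
# is only allowed to traverse that volume. Values are illustrative only.

bounding_volume = {"type": "sphere", "center": (0.0, 0.0, 0.0), "radius": 5.0}

def constrain_to_volume(requested_pos, volume):
    """Clamp a requested camera position to the interior of a spherical volume."""
    assert volume["type"] == "sphere"  # cone or frustum checks would be analogous
    offset = [p - c for p, c in zip(requested_pos, volume["center"])]
    dist = math.sqrt(sum(d * d for d in offset))
    if dist <= volume["radius"]:
        return tuple(requested_pos)  # already within the permissible volume
    scale = volume["radius"] / dist
    return tuple(c + d * scale for c, d in zip(volume["center"], offset))

print(constrain_to_volume((9.0, 0.0, 0.0), bounding_volume))  # (5.0, 0.0, 0.0)
print(constrain_to_volume((1.0, 2.0, 0.0), bounding_volume))  # (1.0, 2.0, 0.0)
```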
Regarding Claim 9: “The method of claim 1, wherein the camera control data is included in an MPEG_camera_control extension.” (Note that the prior art teaches using camera control data in the context of “conventional encoding techniques, such as MPEG-4 … and extensions of such standards, to transmit and receive digital video information more efficiently.” See AAPA Specification, Paragraphs 3 and 6, and Graziosi, Paragraph 39. This makes it obvious that “The one or more bitstreams may include position information corresponding to each of the plurality of objects 304 in the 3D space 302, and encoding information that may comprise geometrical information (e.g. vertices, edges, or faces) and the camera parameters 310A of each of the plurality of cameras 306A to 306D,” would be provided in an extension of MPEG for storing this data. Graziosi, Paragraphs 46, 25. See statement of motivation in Claim 1.)
Regarding Claim 10: “The method of claim 9, wherein the MPEG_camera_control extension includes one or more of: … anchors data representing a number of anchor points for permissible paths for the virtual camera; … segments data representing a number of path segments for the permissible paths between the anchor points; … bounding volume data representing a bounding volume for the virtual camera; … intrinsic parameters indicating whether camera parameters are modified at each of the anchor points; and … accessor data representing an index of an accessor that provides the camera control data.” (“The one or more bitstreams may include … encoding information that may comprise geometrical information (e.g. vertices [points], edges [segments], or faces) and the camera parameters 310A of each of the plurality of cameras 306A to 306D,” in an extension of MPEG that stores this data. Graziosi, Paragraphs 46, 25. Also see treatment of this data in Claims 6-7. See statement of motivation in Claim 1.)
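By way of illustration only, the following Python sketch shows one possible glTF-style layout for an MPEG_camera_control extension carrying the fields recited in Claim 10; the key names are invented for illustration and are not taken from any published MPEG-I or glTF specification.

```python
import json

# Hypothetical illustration only: one possible glTF-style layout for an
# "MPEG_camera_control" extension carrying the fields recited in claim 10.
# The key names below are invented and not taken from any published standard.

camera_control_extension = {
    "extensions": {
        "MPEG_camera_control": {
            "anchors": 3,            # number of anchor points for permissible paths
            "segments": 2,           # number of path segments between the anchor points
            "boundingVolume": {      # bounding volume for the virtual camera
                "type": "sphere",
                "center": [0.0, 0.0, 0.0],
                "radius": 5.0,
            },
            "intrinsicParameters": False,  # whether camera parameters change at anchors
            "accessor": 7,           # index of the accessor providing the control data
        }
    }
}

print(json.dumps(camera_control_extension, indent=2))
```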
Regarding Claim 11: “The method of claim 1, wherein the at least one virtual solid object comprises one of a virtual wall, a virtual chair, or a virtual table.” (This claim is rejected for the reasons stated for Claim 1, because these examples of virtual solid objects do not materially alter the method of Claim 1 or the prior art, which apply to inspecting any solid object. Cumulatively, note that Khan inspects walls of a cube in Figs. 2-3 and of a cylinder in Paragraph 38, and allows for other examples of solid objects. See statement of motivation in Claim 1.)
Regarding Claim 12: “The method of claim 1, further comprising determining permissible paths for the virtual camera from the camera control data, wherein updating the location of the virtual camera comprises ensuring that the virtual camera moves only along virtual paths that are within the permissible paths defined in the camera control data.” (“For specific surface-based tasks like 3D painting or sculpting, the present invention provides a subset of this freedom with the benefit of following the surface, … Again the vector "i" may try to move off the path, a new desired vector will be computed, and the blended vector will basically move the eye back to the path represented by the black dashed line 180.” a set of data that limits the permissible paths for a virtual camera. Khan, Paragraphs 38, 41.)
Regarding Claim 13: “The method of claim 1, wherein the camera control data is included in an MPEG_mesh_collision extension.” (Note that prior art teaches using camera control data in the context of “conventional encoding techniques, such as MPEG-4 … and extensions of such standards, to transmit and receive digital video information more efficiently.” See AAPA Specification, Paragraphs 3 and 6, and Graziosi, Paragraph 39. This makes it obvious “to utilize the MPEG AFX mesh compression technique to encode the plurality of 3D geometric meshes in a sequence. … The one or more bitstreams may include position information corresponding to each of the plurality of objects 304 in the 3D space 302, and encoding information that may comprise geometrical information (e.g. vertices, edges, or faces [of a mesh]) and the camera parameters 310A of each of the plurality of cameras 306A to 306D,” in an extension of MPEG for storing this data. Graziosi, Paragraphs 46, 25. See statement of motivation in Claim 1.)
Claim 14, “A device for retrieving media data,” is rejected for reasons stated for Claim 1, and because prior art teaches:
“a memory configured to store media data; and one or more processors implemented in circuitry and configured to execute a presentation engine,” (“A combination of hardware and software may be a general-purpose computer system with a computer program that, when loaded and executed, may control the computer system such that it carries out the methods described herein.” Graziosi, Paragraphs 86-87 and statement of motivation in Claim 1.)
Claims 15-17 and 19-26 are rejected for the reasons stated for Claims 2-13, respectively, in view of the Claim 14 rejection.
Claim 27, “A non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause a processor executing a presentation engine to: …” is rejected for reasons stated for Claim 1, and because prior art teaches: (“The present disclosure may also be embedded in a computer program product, which comprises all the features that enable the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.” Graziosi, Paragraphs 86-87 and statement of motivation in Claim 1.)
Claims 28-30 and 32-39 are rejected for the reasons stated for Claims 2-13, respectively, in view of the Claim 27 rejection.
Claim 40, “A device for retrieving media data,” is rejected for reasons stated for Claim 14, because the means of Claim 40 are embodied in the functions performed by the memory and processors of Claim 14.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIKHAIL ITSKOVICH whose telephone number is (571)270-7940. The examiner can normally be reached Mon. - Thu. 9am - 8pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph Ustaris can be reached at (571)272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MIKHAIL ITSKOVICH/Primary Examiner, Art Unit 2483