DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a server communication module”, “a communication module” and “a motion recognition module” in claims 1-3, 6 and 11. (The specification provides support for the server “communication module” in paragraphs 0072 and 0090, and for the “motion recognition module” in paragraphs 0064 and 0092 of the published application.)
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 2 recites the limitation "the metadata for the user interaction”. There is insufficient antecedent basis for this limitation in the claim. Claim 1 recites “metadata for the edited user interaction”. The Examiner suggests keeping the language consistent.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 5-9 and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Palmer (US 20200120400).
Regarding claim 1, Palmer discloses a server for providing a three-dimensional (3D) video content service, the server comprising:
a server communication module (server; see at least paragraphs 0141, 0169 and 0188);
a server memory (server; see at least paragraphs 0141, 0169 and 0188); and
a server processor connected to the server communication module and the server memory (server; see at least paragraphs 0141, 0169 and 0188),
wherein, when user interaction editing information for 3D video content is input (retrieving edit decision list EDL; see at least Fig. 1 and paragraph 0142), the server processor edits a user interaction for the 3D video content according to the user interaction editing information and generates and stores metadata for the edited user interaction (processing the EDL to generate interactive media content and recording the user interactions; see at least Fig. 1 and paragraphs 0141 and 0145).
Regarding claim 2, Palmer discloses the server of claim 1, wherein, when a video content play device requests the server processor to play the 3D video content through the server communication module, the server processor transmits 3D video content information including at least one of the 3D video content, metadata for the 3D video content, the metadata for the user interaction, and 3D video content player information to the video content play device such that the 3D video content is played by the video content play device (prefetching and buffering the interactive media content; see at least paragraphs 0147, 0200-0201, 0231, 0233 and 0246, and playing the interactive media content; see at least paragraphs 0166, 0173 and 0200).
Regarding claim 3, Palmer discloses the server of claim 2, wherein the server processor transmits a user interaction guide message for requesting a corresponding user interaction at a user interaction occurrence time which is set according to a scenario of the played 3D video content to the video content play device through the server communication module (such as instructing a user to open their mouth; see at least Figs. 7C-7J and paragraphs 0204-0206).
Regarding claim 5, Palmer discloses the server of claim 1, wherein the server processor recognizes a video player which will play a plurality of types of 3D video content and user interaction information using an interactive 3D video content player application programming interface (API) and an interaction API (see at least paragraphs 0166, 0173, 0200 and 0242).
Regarding claim 6, Palmer discloses a device for playing video content, the device comprising:
a communication module (see at least Fig. 3 and paragraph 0186);
a memory (see at least Fig. 3 and paragraph 0186);
a motion recognition module (see at least Fig. 3 and paragraph 0186); and
a processor connected to the communication module, the memory, and the motion recognition module (see at least Fig. 3 and paragraphs 0186-0187),
wherein the processor receives three-dimensional (3D) video content from a 3D video content service server through the communication module (by prefetching and buffering interactive media content; see at least paragraphs 0147, 0200-0201, 0231, 0233 and 0246, wherein the interactive content can be AR and 3D content; see at least paragraphs 0154-0156 and 0205) and plays the 3D video content according to a 3D video content play request (playing the interactive media content; see at least paragraphs 0166, 0173 and 0200), when a user interaction is recognized by the motion recognition module while the 3D video content is played, determines whether the recognized user interaction is appropriate, and when the recognized user interaction is appropriate, plays 3D video content corresponding to the recognized user interaction (when a user performs an interaction, interactive content is overlaid based on the interaction, such as an AR mask; see at least Figs. 7C-7J and paragraphs 0204-0206).
Regarding claim 7, Palmer discloses the device of claim 6, wherein, in response to the 3D video content play request, the processor receives 3D video content information including at least one of the 3D video content, metadata for the user interaction, metadata for the 3D video content, and 3D video content player information for playing the 3D video content from the 3D video content service server (prefetching, buffering and playing the interactive media content; see at least paragraphs 0147, 0166, 0200-0201, 0231, 0233 and 0246).
Regarding claim 8, Palmer discloses the device of claim 7, wherein the processor plays the 3D video content using a 3D video content player corresponding to the 3D video content player information (playing the interactive media content; see at least paragraphs 0166, 0173 and 0200; furthermore, the player information limitation is recited in alternative language).
Regarding claim 9, Palmer discloses the device of claim 6, wherein, when a user interaction guide message is received from the 3D video content service server while the 3D video content is played, the processor outputs interaction guide information for describing user interactions applied to the video content play device (such as instructing a user to open their mouth; see at least Figs. 7C-7J and paragraphs 0204-0206).
Regarding claim 11, Palmer discloses the device of claim 6, wherein the motion recognition module includes at least one of a leap motion device, a webcam, and a touchscreen (see at least paragraphs 0146 and 0172).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 12-16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Palmer in view of Inch (US 20210124475).
Regarding claim 12, Palmer discloses a method of providing a three-dimensional (3D) video content service according to a user interaction, the method comprising:
providing, by a 3D video content service server, an interactive 3D video content play environment on a video content play device (downloading an application software to a user device for generating interactive media content; see at least paragraphs 0081-0082, 0188 and 0192. Furthermore, the interactive content can be AR and 3D content; see at least paragraphs 0154-0156 and 0205);
transmitting, by the 3D video content service server, 3D video content information of 3D video content requested by the video content play device to the video content play device (by prefetching and buffering the interactive media content; see at least paragraphs 0147, 0200-0201, 0231, 0233 and 0246);
playing, by the video content play device, the 3D video content (playing the interactive media content; see at least paragraphs 0166, 0173 and 0200);
when the played 3D video content requires a user interaction, transmitting, by the 3D video content service server, a user interaction guide message to the video content play device (such as instructing a user to open their mouth; see at least Figs. 7C-7J and paragraphs 0204-0206); and
when a user interaction is recognized, playing, by the video content play device, 3D video content corresponding to the recognized user interaction (when a user performs an interaction, interactive content is overlaid based on the interaction, such as an AR mask; see at least Figs. 7C-7J and paragraphs 0204-0206).
Palmer does not clearly disclose a web-based interactive video content play environment.
Inch discloses the above missing limitation: a 3D web interaction system that allows a user to select a content item from a browser, displayed in an artificial reality environment; see at least paragraphs 0014-0016.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palmer with the teachings of Inch to include the above limitations so as to allow a user to experience different parts of the content and change viewpoints as the user’s perspective changes; see at least paragraph 0014.
Regarding claim 13, Palmer in view of Inch disclose the method of claim 12, wherein the playing of the 3D video content comprises playing, by the video content play device, the 3D video content using a 3D video content player corresponding to 3D video content player information included in the 3D video content information (Palmer; playing the interactive media content; see at least paragraphs 0166, 0173 and 0200).
Regarding claim 14, Palmer in view of Inch disclose the method of claim 12, wherein the playing of the 3D video content corresponding to the recognized user interaction comprises:
outputting, by the video content play device, interaction guide information for describing user interactions applied to the video content play device (Palmer; such as instructing a user to open their mouth; see at least Figs. 7C-7J and paragraphs 0204-0206);
when the user interaction is recognized, determining, by the video content play device, whether the recognized user interaction is appropriate (Palmer; when a user performs an interaction, interactive content is overlaid based on the interaction, such as an AR mask; see at least Figs. 7C-7J and paragraphs 0204-0206); and
when the user interaction is appropriate, playing, by the video content play device, 3D video content corresponding to the recognized user interaction (Palmer; playing the interactive media content; see at least paragraphs 0166, 0173 and 0200).
Regarding claim 15, Palmer in view of Inch disclose the method of claim 14, wherein the outputting of the interaction guide information comprises recognizing, by the video content play device, a type of video content play device using a device interaction application programming interface (API) (Palmer; see at least paragraph 0242) and outputting interaction guide information for describing user interactions applied to the recognized type of video content play device (Palmer; playing the interactive media content; see at least paragraphs 0166, 0173, 0200 and 0242).
Regarding claim 16, Palmer in view of Inch disclose the method of claim 14, wherein the determining of whether the recognized user interaction is appropriate comprises determining, by the video content play device, whether the recognized user interaction is appropriate for user interaction metadata which is set for a corresponding point in time (Palmer; when a user performs an interaction at any time, interactive content is overlaid based on the interaction, such as an AR mask; see at least Figs. 7C-7J and paragraphs 0204-0206).
Regarding claim 18, Palmer in view of Inch disclose the method of claim 12, further comprising:
receiving, by the 3D video content service server, user interaction editing information for the 3D video content to edit the user interaction for the 3D video content (Palmer; retrieving edit decision list EDL; see at least Fig. 1 and paragraph 0142); and
generating, by the 3D video content service server, metadata for the edited user interaction (Palmer; by processing the EDL to generate interactive media content and recording the user interactions; see at least Fig. 1 and paragraphs 0141 and 0145).
Claims 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Palmer in view of Bragdon (US 2013/0346917).
Regarding claim 4, Palmer discloses the server of claim 3, wherein, when a user interaction recognition success message is received from the video content play device, the server processor plays 3D video content corresponding to the user interaction (playing the interactive media content; see at least paragraphs 0166, 0173, 0200 and 0242), but does not clearly disclose that a user interaction recognition success message is received from a device.
Bragdon discloses the above missing limitation; a user interaction log file that captures certain user actions is collected and analyzed; see at least paragraphs 0015 and 0025-0027.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palmer with the teachings of Bragdon to include the above limitations so as to provide a developer with insight into the user’s behavior when using an application; see at least the Abstract.
Regarding claim 10, Palmer discloses the device of claim 6, wherein the processor determines whether the recognized user interaction is appropriate for user interaction metadata which is set for a corresponding point in time, and when the recognized user interaction is appropriate, receives 3D video content corresponding to the user interaction from the 3D video content service server, and plays the received 3D video content (when a user performs an interaction, interactive content is overlaid based on the interaction, such as an AR mask; see at least Figs. 7C-7J and paragraphs 0204-0206), but does not clearly disclose transmitting a user interaction recognition success message to a server.
Bragdon discloses the above missing limitation; a user interaction log file that captures certain user actions is collected and analyzed; see at least paragraphs 0015 and 0025-0027.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palmer with the teachings of Bragdon to include the above limitations so as to provide a developer with insight into the user’s behavior when using an application; see at least the Abstract.
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Palmer in view of Inch, and further in view of Bragdon.
Regarding claim 17, Palmer in view of Inch disclose the method of claim 14, wherein the playing, by the video content play device, of the 3D video content corresponding to the recognized user interaction when the user interaction is appropriate comprises:
providing 3D video content corresponding to the recognized user interaction to the video content play device (by prefetching and buffering the interactive media content; see at least paragraphs 0147, 0200-0201, 0231, 0233 and 0246); and
receiving and playing, by the video content play device, the 3D video content corresponding to the user interaction (Palmer; playing the interactive media content; see at least paragraphs 0166, 0173, 0200 and 0242), but do not clearly disclose transmitting, by a play device, a user interaction recognition success message to a server and analyzing, by the server, user interaction metadata included in the message to recognize the user interaction.
Bragdon discloses the above missing limitation; a user interaction log file that captures certain user actions is collected from multiple users and analyzed by a server; see at least paragraphs 0015 and 0025-0027.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palmer in view of Inch with the teachings of Bragdon to include the above limitations so as to provide a developer with insight into the user’s behavior when using an application; see at least the Abstract.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YASSIN ALATA whose telephone number is (571)270-5683. The examiner can normally be reached Mon-Fri 7-4 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi can be reached at 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YASSIN ALATA/Primary Examiner, Art Unit 2426