DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Preliminary Amendments
The Examiner notes that two preliminary amendments were submitted on September 3, 2024 and September 20, 2024. However, both amendments fail to comply with 37 CFR 1.121 and MPEP 714(II)(C). Compliance with these provisions is required in the next response.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “user interface module” in Claims 1-2, “data acquisition module” in Claims 1 and 4, “virtual reality (VR) module” in Claims 1 and 5, “artificial intelligence (AI) module” in Claims 1 and 6, “mixed reality integration module” in Claims 1 and 7, and “recording module” in Claims 1 and 8.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 4, 9, 12, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai (US 20240212223A1) in view of Kim (US 20190164346A1).
As per Claim 1, Palamadai teaches a system for facilitating interactive experiences with famous personalities (enable the creation of immersive and personalized experiences, the ability to interact with a celebrity with whom a user might not have the opportunity to interact in person, [0011]), comprising: a computing device having a processor and a memory in communication with said processor configured to store one or more instructions executable by said processor (non-transitory computer-readable medium stores instructions which, when executed by a processing system, cause the processing system to perform operations, [0004]), wherein said computing device is in communication with a server and a database via a network (network 102 may include an application server (AS) 104, which may comprise a computing system, network 102 may also include a database 106 that is communicatively coupled to the AS 104, [0017]), wherein said computing device is configured to execute a plurality of modules [0004] includes: a user interface module configured to receive inputs from a user by selecting a famous personality (the user select one individual from the multiple individuals, [0038]) and a corresponding virtual environment (extended reality environment may emulate a real world location, which may be selected by the user, depending on the nature of the interaction, the extended reality environment may emulate the user’s living room, the home of someone, a coffee shop, an office, a golf course, or any other locations, [0043]); a data acquisition module configured to retrieve information about the selected famous personality (storage device (database server), to store profiles for various individuals, where the individuals may include celebrities, profile containing information about the individual which may be used to control a dynamic interaction with a user, [0020]) and the corresponding virtual environment [0043]; a virtual reality (VR) module (extended reality is an umbrella 
term that has been used to refer to various different forms of immersive technologies, including virtual reality, [0002]) configured to generate and display the selected famous personality and the corresponding virtual environment based on the user inputs (adaptive simulation of celebrity avatars in extended reality environments, [0003], [0043]); an artificial intelligence (AI) module configured to tailor interactions between the user and the selected famous personality (utilize artificial intelligence to align the behavior of an avatar (which may represent a celebrity) with the expectations of a user who is interacting with the avatar, [0066]) according to the user inputs and preferences (records for the user’s virtual interactions could be used to train a machine learning model to predict which individuals’ avatars the user responds best to, learn what the processing system may get right or wrong about simulating an individual’s appearance, voice, behavior, or the like via the avatar, and to learn other user preferences, this may in turn help the processing system to make better recommendations to the user in the future (to fine tune the matching of avatars to the user’s preferences, to refine the presentation of the avatars, etc.), [0063]); a mixed reality integration module configured to blend virtual elements with real-world environments to enhance the immersive experience (extended reality is an umbrella term that has been used to refer to various different forms of immersive technologies, including mixed reality, extended reality technologies allow virtual world objects to be brought into real world environments, [0002]); and a recording module configured to capture and store one or more video of the user’s interactions with the famous personalities (store a record of the virtual interaction, [0060], adaptive simulation of celebrity avatars in extended reality environments, rendering an extended reality environment in which the virtual interaction will 
occur, [0003]), whereby the system provides an immersive and personalized entertainment experience for the user by executing the plurality of modules to facilitate, engage [0011], and record interactions with the famous personalities ([0060], [0003]).
However, Palamadai does not expressly teach capturing and storing one or more photographic records of the user’s interactions with the famous personalities. However, Kim teaches a recording module configured to capture and store one or more photographic records of the user’s interactions with the famous personalities (enabling a cybernaut to virtually take a photo with a famous person through an AR function providing display, [0002]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai to include capturing and storing one or more photographic records of the user’s interactions with the famous personalities because Kim suggests that this is well-known in the art (in a conventional AR service providing a method using an AR technique to enable a cybernaut virtually to take a photo with a famous person, [0003]).
As per Claim 4, Palamadai teaches wherein the data acquisition module is further configured to store metadata related to each famous personality, including biographical information (actor may provide some biographical data, [0040]), typical behaviors (behavior are specified in the famous golfer’s profile, [0028], store profiles for various individuals, [0020]), and signature phrases (if the individual uses any catchphrases, the avatar may be programmed to utilize those distinct catchphrases, [0044]), and update the database periodically to include newly available famous personalities (create an entirely new avatar (with an associated profile), [0037], [0020]) and environments [0043].
As per Claims 9 and 12, these claims are similar in scope to Claims 1 and 4 respectively, and therefore are rejected under the same rationale.
As per Claim 17, Claim 17 is similar in scope to Claim 9, except Claim 17 is directed to a non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform the method of Claim 9. Palamadai teaches a non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform the method (non-transitory computer-readable medium stores instructions which, when executed by a processing system, cause the processing system to perform operations, [0004]). Thus, Claim 17 is rejected under the same rationale as Claim 9.
Claim(s) 2 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai (US 20240212223A1) and Kim (US 20190164346A1) in view of Munari (US009764225B1) and Anderegg (US 20210008461A1).
As per Claim 2, Palamadai and Kim are relied upon for the teachings as discussed above relative to Claim 1. Palamadai teaches a user interface module configured to receive inputs from a user by selecting a famous personality [0038] from various genres including movie stars, movie characters (user may specify the desire to talk to Han Solo, [0032]), sports stars, sports entertainment figures (interact with the avatar 118 of a famous golfer, [0028]), cartoons (popular cartoon character, [0067]), political figures, historical figures, fictional characters, or the like (identify a specific celebrity, a specific historical figure, a specific fictional character, or the like, the user may specify the desire to talk to Abraham Lincoln, [0032]). Since Palamadai teaches “or the like”, it would have been obvious to one of ordinary skill in the art that this includes music stars, television stars, superheroes, supervillains, video game characters, and mythological figures. The user interface module is configured to receive inputs from a user by selecting
a corresponding virtual environment [0043].
However, Palamadai and Kim do not expressly teach wherein the user interface module is further configured to present a selection interface, which displays a collection of famous personalities. However, Munari teaches wherein the user interface module is further configured to present a selection interface, which displays a collection of famous personalities (video display 111 presents a list of celebrities, upon receiving a celebrity selection, col. 6, lines 44-47).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai and Kim so that the user interface module is further configured to present a selection interface, which displays a collection of famous personalities because Munari suggests that this makes it easy for a user to select their desired celebrity (col. 6, lines 44-47).
However, Palamadai, Kim, and Munari do not expressly teach wherein the user interface module is further configured to present the selection interface, which displays corresponding virtual environments. However, Anderegg teaches wherein the user interface module is further configured to present the selection interface, which displays corresponding virtual environments (user 120 may select one of virtual environments 236a or 236b from thumbnails depicting those virtual environments, displayed on display screen, [0049]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai, Kim, and Munari so that the user interface module is further configured to present the selection interface, which displays corresponding virtual environments because Anderegg suggests that this makes it easy for a user to select their desired virtual environment [0049].
As per Claim 10, Claim 10 is similar in scope to Claim 2, and therefore is rejected under the same rationale.
Claim(s) 3 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai (US 20240212223A1), Kim (US 20190164346A1), Munari (US009764225B1), and Anderegg (US 20210008461A1) in view of Williams (US 20250218123A1) and McGucken (US 20090017886A1).
As per Claim 3, Palamadai, Kim, Munari, and Anderegg are relied upon for the teachings as discussed above relative to Claim 2. Palamadai teaches wherein the selection interface is configured to allow the user to customize their virtual experience by selecting specific activities and interactions, including sports-play and dialogue exchange with the chosen famous personalities ([0038], user may be able to interact with the famous golfer, to play a round of golf, to discuss other subjects, [0028]).
However, Palamadai, Kim, Munari, and Anderegg do not expressly teach that the specific activities and interactions include singing. However, Williams teaches that the specific activities and interactions include singing (receives a request for a replicant persona to interact with a user, the requested interaction may be singing karaoke with a band or sing, playing or learning a sport with an athlete, discussing a movie with one or more of the actors, playing a game with one or celebrities, and/or any other interaction, [0069]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai, Kim, Munari, and Anderegg so that the specific activities and interactions include singing because Williams suggests that singing karaoke is well-known in the art ([0069]).
However, Palamadai, Kim, Munari, Anderegg, and Williams do not expressly teach that the specific activities and interactions include combat, adventure, mission-based objectives, and dancing. However, McGucken teaches that the specific activities and interactions include combat (one of the heroes has a special combat move, [0590]), adventure (video game that followed The Hero’s Journey, which called the protagonist to adventure, [0044]), mission-based objectives (reconnaissance mission, [0894]), and dancing (physical action must dance, [0923]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai, Kim, Munari, Anderegg, and Williams so that the specific activities and interactions include combat, adventure, mission-based objectives, and dancing because McGucken suggests that this presents deeper gameplay, meaning, character, and story [0004].
As per Claim 11, Claim 11 is similar in scope to Claim 3, and therefore is rejected under the same rationale.
Claim(s) 5 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai (US 20240212223A1) and Kim (US 20190164346A1) in view of Brewer (US 20190250805A1).
As per Claim 5, Palamadai and Kim are relied upon for the teachings as discussed above relative to Claim 1.
However, Palamadai and Kim do not expressly teach wherein the VR module is further configured to render high-fidelity, immersive virtual environments using advanced graphics processing techniques, and support multiple users simultaneously within the same virtual environment for collaborative interactions. However, Brewer teaches wherein the VR module is further configured to render high-fidelity, immersive virtual environments using advanced graphics processing techniques (VR devices can provide complex features and high-fidelity representations of a physical world, [0005], providing the virtual environments as an immersive experiences for VR users, [0021]), and support multiple users simultaneously within the same virtual environment for collaborative interaction (collaborative virtual environment among a plurality of user devices, [0006]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai and Kim so that the VR module is further configured to render high-fidelity, immersive virtual environments using advanced graphics processing techniques because Brewer suggests that this is well-known in the art [0005]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai and Kim to include supporting multiple users simultaneously within the same virtual environment for collaborative interaction because Brewer suggests that this way, users can collaborate in new ways that enhance decision-making, reduce product development timelines [0053].
As per Claim 13, Claim 13 is similar in scope to Claim 5, and therefore is rejected under the same rationale.
Claim(s) 6 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai (US 20240212223A1) and Kim (US 20190164346A1) in view of Williams (US 20250218123A1).
As per Claim 6, Palamadai and Kim are relied upon for the teachings as discussed above relative to Claim 1.
However, Palamadai and Kim do not expressly teach wherein the AI module is further configured to enable the selected famous personality to address the user by name during interactions, thereby enhancing the personalized experience. However, Williams teaches wherein the AI module (replicant persona is an artificial intelligence driven digital recreation of an individual such as celebrities, [0021]) is further configured to enable the selected famous personality to address the user by name during interactions, thereby enhancing the personalized experience (avatar may know the user’s name and call them by name directly, [0032]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai and Kim so that the AI module is further configured to enable the selected famous personality to address the user by name during interactions, thereby enhancing the personalized experience because Williams suggests that this makes the interactions more personalized [0032].
As per Claim 14, Claim 14 is similar in scope to Claim 6, and therefore is rejected under the same rationale.
Claim(s) 7 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai (US 20240212223A1) and Kim (US 20190164346A1) in view of Bradski (US 20160026253A1).
As per Claim 7, Palamadai and Kim are relied upon for the teachings as discussed above relative to Claim 1. Palamadai teaches wherein the mixed reality integration module is further configured to utilize augmented reality (AR) technology to project virtual elements [0002], including unique locations, into the user’s real-world environment, thereby enhancing the overall immersive and interactive experience [0043].
However, Palamadai and Kim do not expressly teach that the virtual elements include character-specific transportation vehicles, vessels, realms, worlds, planets, devices, weapons, and accessories. However, Bradski teaches wherein the mixed reality integration module is further configured to utilize AR technology to project virtual elements [0168], including character-specific transportation vehicles, vessels, realms, worlds, planets, into the user’s real-world environment, thereby enhancing the overall immersive and interactive experience (second virtual décor may replicate a command deck of a spacecraft (Starship) with a view of a planet, [1414]), devices, weapons, and accessories (AR system to customize their weapons, [1562], user may select customizations via a virtual customization user interface renders to each user’s field of view by their respective individual AR systems, the users may pick custom accessories (scopes, night vision scopes, laser scopes, fins, lights), [1563], augmented reality scene may allow a user of AR technology may see virtual objects super-imposed on or amidst real world objects, [0002]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai and Kim so that the virtual elements include character-specific transportation vehicles, vessels, realms, worlds, planets, devices, weapons, and accessories because Bradski suggests that the user can pick the virtual elements that they desire to see [1563].
As per Claim 15, Claim 15 is similar in scope to Claim 7, and therefore is rejected under the same rationale.
Claim(s) 8 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Palamadai (US 20240212223A1) and Kim (US 20190164346A1) in view of Park (US 20230334775A1).
As per Claim 8, Palamadai and Kim are relied upon for the teachings as discussed above relative to Claim 1. Palamadai teaches wherein the recording module is further configured to capture high-resolution video and audio of the user’s interactions with the virtual famous personalities ([0060], [0003]).
However, Palamadai and Kim do not teach offering options for the user to edit and share their recorded experiences on social media platforms. However, Park teaches wherein the recording module is further configured to capture video and audio of the user’s interactions in the virtual environment, and offer options for the user to edit and share their recorded experiences on social media platforms (capturing a video from within a virtual reality (VR) environment, capture respective viewpoints of events in the VR environment, and generate a video based on the captured viewpoints, sharing the generated video to a social media platform, editing the shared video on the social media platform to generate an edited video, and posting the edited video on the social media platform, [0009]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Palamadai and Kim to include offering options for the user to edit and share their recorded experiences on social media platforms because Park suggests that this way, the user can capture the world in a VR game in a video and share it with friends so that more users can know that this world exists and how it works so that more users have the chance to join in [0084].
As per Claim 16, Claim 16 is similar in scope to Claim 8, and therefore is rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONI HSU whose telephone number is (571)272-7785. The examiner can normally be reached M-F 10am-6:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JH
/JONI HSU/Primary Examiner, Art Unit 2611