DETAILED ACTION
This action is in response to communication filed on 19 September 2023. Claims 1-22 are pending in the application and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 6, 8-9 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over DONDERICI (US20230386138A1) in view of O’BRIEN et al. (US20240354807A1) and further in view of MIRANDA et al. (US11416635B1).
As to claim 1, DONDERICI teaches a system comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the IFEC system to: subsequent to a commercial passenger vehicle departing on a journey, generate a shared virtual environment that is accessible and traversable by passengers of the commercial passenger vehicle (see figs. 1-7, par. 0109, wherein Example 18 depicts a computer system, including a computer processor for executing computer program instructions; and one or more non-transitory computer-readable media storing computer program instructions executable by the computer processor to perform operations including: providing, to a user, a ride in a real-world environment through a vehicle, the user associated with a client device; generating a virtual scene based on the ride of the user in the vehicle, the virtual scene including a virtual representation of the user or the vehicle; and sending the virtual scene to the client device, the client device configured to display the virtual scene to the user; see also par. 0015; as taught by DONDERICI)
wherein the shared virtual environment comprises at least one interactable virtual feature related to a destination of the journey or an interior layout of the commercial passenger vehicle (see par. 0073, wherein the virtual environment manager 450 may also control or modify one or more real-world objects that are outside of the AV 110 based on the virtual environment. In an example, the virtual environment manager 450 may change a road condition (e.g., blocking a road, etc.) given that one or more conditions are not met in the virtual environment. A condition in the virtual environment may be a target virtual activity to be performed by the passenger; see also par. 0016; as taught by DONDERICI);
select, for a first passenger of the commercial passenger vehicle requesting entry into the shared virtual environment, a first virtual avatar (see par. 0014, wherein a user, such as a passenger of an AV, may engage in the virtual environment via a virtual representation (e.g., avatars) of himself or herself; see also par. 0068, wherein the virtual environment manager 450 generates a virtual scene that simulates the AV 110 and one or more passengers in the AV 110, and can place the virtual scene into a virtual environment that includes other simulated AVs and avatars of other people; as taught by DONDERICI);
operate the shared virtual environment to facilitate a virtual interaction between the first virtual avatar associated with the first passenger and a second virtual avatar that represents a second passenger of the commercial passenger vehicle (see par. 0070, wherein the passenger can interact with other people through the virtual environment. For instance, the passenger and another person can access the same virtual environment, which includes virtual representations of both. The avatar of the passenger can interact with the avatar of the person in the same virtual environment. They can talk to each other, send messages to each other, play a game together, work together, and so on. In some embodiments, the interaction may represent a real-world interaction. For instance, the virtual environment manager 450 detects an interaction between the passenger and another person, e.g., based on sensor data. The virtual environment manager 450 projects the interaction into the virtual environment. The virtual environment manager 450 may generate one or more messages based on the detected interaction and include the messages in the virtual environment. The passenger and person may continue their interaction through the virtual environment; see also par. 0052-0053, 0071 and 0088-0090; as taught by DONDERICI);
wherein the virtual interaction and the personal attributes conveyed by the first virtual avatar are captured by state data for the shared virtual environment that is locally stored onboard the commercial passenger vehicle (see fig. 3, par. 0043, wherein the fleet management system 120 includes a client device interface 310, various data stores 340-360, and a vehicle manager 370. The client device interface 310 includes a ride request interface 320 and user setting interface 330. The data stores include user ride datastore 340, map datastore 350, and user interest datastore 360; see also pars. 0046-0053; as taught by DONDERICI);
and in response to the commercial passenger vehicle concluding the journey (see fig. 5, par. 0077, wherein the virtual scene 501 also shows a game that the user 535 is playing during the ride. For purpose of illustration, the game is a racing game that includes a racing car 505, other cars 506 (individually referred to as “car 506”), a driver 507 in the racing car 505, and a flag set 508 that indicates an end of the racing. The racing car 505 may be a virtual representation of the AV 510. The driver 507 may be a virtual representation of the user 535. In some embodiments, the game scene is generated based on the ride in the real-world environment 500. For instance, the flag set 508, which can trigger the racing car 505 to stop in the game, may be generated based on the presence of the stop sign 540, which triggers the AV 510 to stop in the real-world; as taught by DONDERICI).
DONDERICI does not expressly teach an in-flight entertainment and communication (IFEC) system, at least in part via seatback display devices located within the commercial passenger vehicle, from a plurality of virtual avatars associated with the first passenger, the first virtual avatar being configured to convey a number of personal attributes of the first passenger according to an anonymity level indicated by the first passenger; perform a security processing of the state data at least with respect to the virtual interaction and the personal attributes captured within the state data.
In similar field of endeavor, O’BRIEN teaches an in-flight entertainment and communication (IFEC) system, at least in part via seatback display devices located within the commercial passenger vehicle (see figs. 3-8, par. 0050, wherein as in block 550, the onboard server may send the electronic offering item to a client device to be displayed via an inflight entertainment system. A graphical interface of the inflight entertainment system displayed at the client device enables the passenger on the aircraft to interact with the electronic offering item (e.g., view information for a product and/or service, and purchase the product and/or service). In one example, a connection between the onboard server and the client device may be a wireless connection established via a wireless access point onboard the aircraft. In another example, the client device may be a seatback system on the aircraft, and the connection between the onboard server and the seatback system may be a wired or wireless connection; see also pars. 0057-0059 and 0066-0067; as taught by O’BRIEN).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI apparatus to include the teachings of O’BRIEN for an in-flight entertainment and communication (IFEC) system, at least in part via seatback display devices located within the commercial passenger vehicle. Such a person would have been motivated to make this combination as it is beneficial to be able to use the seatback display device for inflight entertainment so that the user’s handheld device could be used for other purposes (see also O’BRIEN, par. 0002).
DONDERICI and O’BRIEN do not expressly teach from a plurality of virtual avatars associated with the first passenger, the first virtual avatar being configured to convey a number of personal attributes of the first passenger according to an anonymity level indicated by the first passenger; perform a security processing of the state data at least with respect to the virtual interaction and the personal attributes captured within the state data.
In similar field of endeavor, MIRANDA teaches from a plurality of virtual avatars associated with the first passenger, the first virtual avatar being configured to convey a number of personal attributes of the first passenger according to an anonymity level indicated by the first passenger (see col. 2, ll. 6-34, wherein FIG. 1 illustrates an example embodiment of a system for providing a pseudonymous browsing mode. The method 100 includes receiving, by a processor of a computer, input from a user requesting a level of anonymity for a session on an application or website, at operation 105. In various embodiments, the requested level of anonymity is between open browsing and completely incognito browsing. In various embodiments, the requested level of anonymity is closer to open browsing but partially incognito. In various embodiments, the requested level of anonymity is closer to incognito browsing but partially open. In some embodiments, the requested level of anonymity is open browsing. In some embodiments, the requested level of anonymity is completely incognito browsing. In these examples, the avatars provide the anonymity. In some examples, a classification of avatars is used including a preconfigured avatar for specific interactions. In various examples, an avatar may be used for a single instance and then may access dynamic open synthetic data that is closest to the user to create an avatar for interaction, where this avatar could be available for reuse in future. At operation 110, the processor programs an avatar configured to provide the requested level of anonymity to an identity of the user (such as previously-generated data identifying the user) and data generated by the user based on the received input. At operation 115, the processor uses the avatar to control an amount of data shared by the user with the application or website, to provide the requested level of anonymity to an identity of the user and data generated by the user; as taught by MIRANDA);
perform a security processing of the state data at least with respect to the virtual interaction and the personal attributes captured within the state data (see col. 3, ll. 25-39, wherein the avatar includes a profile, or digital certificate, on the client side that is adaptive such that the browser selects and saves the specific profile based on which application or website the user is interfacing, in various embodiments. The avatar determines whether cookies of the user get stored during interaction with the application or website, in various embodiments. In some embodiments, the avatar protects user configurations and provides confidentiality to the user during one or more browsing sessions. The present subject matter provides the ability to switch between levels of privacy for browsing with respect to specific data, as well as with respect to specific sites or applications. In some embodiments, the avatar can track what data of a user is exposed, to provide warnings and/or to identify potential monetization opportunities; as taught by MIRANDA).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI and O’BRIEN apparatus to include the teachings of MIRANDA of selecting, from a plurality of virtual avatars associated with the first passenger, the first virtual avatar being configured to convey a number of personal attributes of the first passenger according to an anonymity level indicated by the first passenger; and performing a security processing of the state data at least with respect to the virtual interaction and the personal attributes captured within the state data. Such a person would have been motivated to make this combination as a user of online services may encounter situations in which it would be desirable to share more or less data generated or used by the user (MIRANDA, col. 1, ll. 12-15).
As to claim 4, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 1. MIRANDA further teaches wherein the instructions further cause the IFEC system to: based on analyzing the virtual interaction, query the first passenger to use a different virtual avatar from the plurality of virtual avatars, the different virtual avatar being configured to convey a fewer number of personal attributes based on being associated with a higher anonymity level relative to the first virtual avatar (see col. 4, ll. 19-31, wherein instructions that when executed by computers, cause the computers to perform operations of receiving input from a user requesting a level of anonymity for a session on an application or website, wherein the requested level of anonymity is between open browsing and completely incognito browsing, and programming an avatar configured to provide the requested level of anonymity to an identity of the user and data generated by the user based on the received input. Further operations include using the avatar to control an amount of data shared by the user with the application or website to provide the requested level of anonymity to an identity of the user and data generated by the user; see also col. 2, ll. 1-5, and col. 3, ll. 14-25; as taught by MIRANDA).
As to claim 6, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 1. DONDERICI further teaches wherein the virtual interaction comprises the first passenger and the second passenger participating in a movie experience in which a mutually-selected movie is modified to include the first virtual avatar and the second virtual avatar with visual content of the mutually-selected movie (see par. 0070, wherein the avatar of the passenger can interact with the avatar of the person in the same virtual environment. They can talk to each other, send messages to each other, play a game together, work together, and so on. In some embodiments, the interaction may represent a real-world interaction; see also par 0090, wherein the virtual environment manager 450 may also generate an augmentation object. The augmentation object represents a class of objects that is absent in the real-world environment, and the virtual scene includes the augmentation object. The interaction may include a conversation between the user and the person. The virtual environment manager 450 may generate the augmentation object based on information in the conversation. For instance, the virtual environment manager 450 may generate audio (e.g., music) or video (e.g., movie) based on a discussion of the audio or video in the conversation. In some embodiments, the virtual environment manager 450 identifies a location based on the interaction and modifies a navigation route of the AV based on the location. In other embodiments, the virtual environment manager 450 can modify a setting of a component of the AV based on the interaction; as taught by DONDERICI).
As to claim 8, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 1. O’BRIEN further teaches further comprising the seatback display devices, each seatback display device being uniquely associated with a different passenger of the commercial passenger vehicle, wherein the shared virtual environment is visually provided to the first passenger of the commercial passenger vehicle via both a corresponding seatback display device and one or more personal electronic devices operated by the first passenger, and wherein the instructions further cause the IFEC system to: receive the anonymity level indicated by the first passenger via one of the corresponding seatback display device or the one or more personal electronic devices (see figs. 2-7, par. 0057, wherein the aircraft 750 may include a communication system 770 to facilitate bidirectional communication with the satellite 720 via the communication link 714. The communication system 770 may include an antenna 772 to receive a downlink signal from the satellite 720 and transmit an uplink signal to the satellite 720 via the communication link 714. The aircraft 750 may include a transceiver 774 in communication with the antenna 772, a modem 776 in communication with the transceiver 774, a network access unit 778 (e.g., a router) in communication with the modem 776, and a wireless access point (WAP) 780 in communication with the network access unit 778. Further, the WAP 780 may communicate with one or more client devices in the aircraft 750, such as seatback systems 785 and/or client devices 790 (e.g., mobile phones, tablets, laptops) in the aircraft 750. 
Therefore, the communication system 770 may receive a downlink signal from the satellite 720 and forward the downlink signal to the client devices and receive an uplink signal from the client devices and forward the uplink signal to the satellite 720, thereby supporting two-way data communications between the client devices within the aircraft 750 and the satellite 720; as taught by O’BRIEN).
As to claim 9, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 8.
DONDERICI further teaches wherein the one or more personal electronic devices comprises a virtual reality head-mounted display device (see figs. 1-5, par. 0074, wherein the user 535 is a passenger in the AV 510 and sits on a seat 515 in the AV 510. The user 535 is wearing a headset 530, which can display VR, AR, or MR scenes to the user 535. The headset 530 may be an embodiment of a client device 130; as taught by DONDERICI).
O’BRIEN further teaches that is communicably coupled with the corresponding seatback display device (see par. 0023, wherein a passenger can browse the various destination-specific categorized groupings of electronic offering items 118 using the client device 120. The client device 120 may comprise, for example, processor-based systems. More specifically, the client device 120 may be a personal electronic device, such as, but not limited to, a mobile phone, a laptop or notebook computer, a tablet computer, a handheld computer or mobile computing device, or any other device with similar capabilities. The client device 120 may be a personal electronic device or may be property of an aircraft operator. In another example, the client device 120 may be a seatback system installed on a passenger seat of the aircraft 100; see also par. 0057; as taught by O’BRIEN).
Claim 12 amounts to the method performed by the system of claim 1. Accordingly, claim 12 is rejected for substantially the same reasons as presented above for claim 1 and based on the references’ disclosure of the necessary supporting hardware and software.
Claims 2, 10-11, 13, 16 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over DONDERICI (US20230386138A1) in view of O’BRIEN et al. (US20240354807A1) and further in view of MIRANDA et al. (US11416635B1) and further in view of LAKE-SCHAAL et al. (US20240335738A1) [hereinafter LAKE].
As to claim 2, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 1. DONDERICI further teaches and indicate, to each of the first passenger and the second passenger, an in-vehicle location for each of the first passenger and the second passenger via the at least one interactable virtual feature related to the interior layout of the commercial passenger vehicle (see par. 0059, wherein the sensor interface 420 interfaces with the sensors in the sensor suite 140. The sensor interface 420 may request data from the sensor suite 140, e.g., by requesting that a sensor capture data in a particular direction or at a particular time. For example, in response to the perception module 430 or another module determining that a user is in a particular seat in the AV 110 (e.g., based on images from an interior sensor 240, a weight sensor, or other sensors), the sensor interface 420 instructs the interior sensor 240 to capture sensor data of the user; as taught by DONDERICI).
DONDERICI, O’BRIEN and MIRANDA do not expressly teach wherein the instructions further cause the IFEC system to: receive, at a time during or subsequent to the virtual interaction, a request for a post-journey communication channel between the first passenger and the second passenger; provide, in response to the request, the post-journey communication channel based on transmitting, to a personal electronic device operated by one of the first passenger or the second passenger, contact information for another one of the first passenger or the second passenger.
In similar field of endeavor, LAKE teaches wherein the instructions further cause the IFEC system to: receive, at a time during or subsequent to the virtual interaction, a request for a post-journey communication channel between the first passenger and the second passenger; provide, in response to the request, the post-journey communication channel based on transmitting, to a personal electronic device operated by one of the first passenger or the second passenger, contact information for another one of the first passenger or the second passenger (see figs 9-12, par. 0133, wherein the interactive media configuration operation 900 may include supporting continuity of play or execution of interactive media title between different vehicles, platforms, and or rides. For example, suppose the group of passengers initially receives an interactive media content (e.g., the Harry Potter movie) for enjoyment inside a ride share (such as Uber, etc.) on vehicle 106 on his way to a first destination (e.g., the airport). Once arriving at the first destination (e.g., the airport) or upon receiving an input from the group of passengers to pause the play of the interactive media content, the processor(s) executing the selecting/configuring operation 930 may pause the interactive media content. The position of the pause in the interactive media content may be stored as part of the passenger profile data 1010 as metadata or the like. Then, once the group of passengers boards another vehicle 108 (e.g., an airplane for the passenger's flight) or upon specifically requesting resuming the interactive media content on a different vehicle 108 (or a different aesthetic output device 300), the interactive media content may be seamlessly resumed to support continuity of the interactive media content; see also par. 0134; as taught by LAKE).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI, O’BRIEN and MIRANDA apparatus to include the teachings of LAKE wherein the instructions further cause the IFEC system to: receive, at a time during or subsequent to the virtual interaction, a request for a post-journey communication channel between the first passenger and the second passenger; provide, in response to the request, the post-journey communication channel based on transmitting, to a personal electronic device operated by one of the first passenger or the second passenger, contact information for another one of the first passenger or the second passenger. Such a person would have been motivated to make this combination as it is beneficial for the users who would like to continue communication or the game that they were playing (see also LAKE, par. 0005).
As to claim 10, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 1. DONDERICI, O’BRIEN and MIRANDA do not expressly teach wherein the security processing includes deleting or purging the state data.
In similar field of endeavor, LAKE teaches wherein the security processing includes deleting or purging the state data (see figs 9-15, par. 0176, wherein although not shown in FIG. 13C, the augmentation process 1338 may use the most recent copies of the data structures 350, 1350 available. For example if an intermediate location or visible place aspect of the existing trip data is deleted or changed as a result of the new trip data or trip event, the server may remove components picked for a deleted location or visible place, add components picked for the new location or visible place, and if a preference data is deleted or changed via passenger input, the server may remove components picked for a deleted preference, add components picked for the new preference; the server may then adjust components based on changes in estimated durations; as taught by LAKE).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI, O’BRIEN and MIRANDA apparatus to include the teachings of LAKE wherein the security processing includes deleting or purging the state data. Such a person would have been motivated to make this combination as it is beneficial for the users to delete data that is deemed unnecessary so that there is less confusion in the future (see also LAKE, par. 0005).
As to claim 11, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 1. DONDERICI, O’BRIEN and MIRANDA do not expressly teach wherein the security processing includes flagging the state data based on the state data capturing at least one of the virtual interaction or the personal attributes, and securely transferring the state data to a ledger database accessible by other mobility providers.
In similar field of endeavor, LAKE teaches wherein the security processing includes flagging the state data based on the state data capturing at least one of the virtual interaction or the personal attributes, and securely transferring the state data to a ledger database accessible by other mobility providers (see figs. 3B-4C, par. 0102, wherein FIG. 3B is a schematic block diagram illustrating aspects of a system and apparatus for configuring interactive media customized for one or more passengers in a vehicle or in relation to a vehicular trip, which may involve controlling output responsive to, trip data, profile data, and/or sensor data indicating one or more passengers' emotional state(s). In an aspect, a group of passengers 330 is sharing a common conveyance 106. In one example, the group of passengers 330 includes passenger A, passenger, B, and up to n number of passengers that may fit the common conveyance 106. In another aspect, a passenger is traveling alone in a vehicle 106. Each passenger provides a respective profile data and trip data, which may be provided to the processor upon the vehicular travel. One or more passengers may be equipped with a biometric sensor 328, such as a smartphone, smart watch, or other biometric devices that provide biometric feedback information about the passenger. The vehicle 106 may be equipped with internal or external cameras 328 b that may capture and provide audio-video information about one or more places, objects, etc., within sight from the common conveyance, or passenger(s) inside the vehicle 106. In an aspect, the vehicle 106 is equipped with one or more biometric sensors 328 c that provide biometric feedback information about the passenger(s) onboard the vehicle 106. Vehicle 106 may also be equipped with a geolocation sensor 328 d that provides geolocation and trip information of the vehicle 106; see also par. 0078, wherein data collected from client devices or users may include, for example, passenger (user) identity, passenger profile (user profile data), sensor data and application data. Passenger identity, passenger profile, and sensor data may be collected by a background (not user-facing) application operating on the client device, and transmitted to a data sink, for example, a cloud-based data server 122 or discrete data server 126. Application data means application state data, including but not limited to records of user interactions with an application or other application inputs, outputs or internal states. Applications may include software for control of cinematic content and supporting functions. Applications and data may be served to one or more system nodes including vehicles 101 (e.g., vehicle or common conveyance 106 through vehicle 120) from one or more of the foregoing servers (e.g., 122, 124, 126) or other types of servers, for example, any server accessing a distributed blockchain data structure 128, or a peer-to-peer (P2P) server 116 including a peer-to-peer network such as a mesh network (including partial, full, and wireless mesh networks), such as may be provided by a set of vehicle devices 118, 120, etc., and the like, operating contemporaneously as micro-servers or clients; as taught by LAKE).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI, O’BRIEN and MIRANDA apparatus to include the teachings of LAKE wherein the security processing includes flagging the state data based on the state data capturing at least one of the virtual interaction or the personal attributes, and securely transferring the state data to a ledger database accessible by other mobility providers. Such a person would have been motivated to make this combination as it is beneficial for the users to store the data that is deemed necessary for future interactions in a database that can be accessed by other outside providers in case the users are connecting to other mobile network providers (see also LAKE, par. 0005).
As to claim 13, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 12. DONDERICI, O’BRIEN and MIRANDA do not expressly teach identifying, using the profile data associated with the first passenger, a recommended passenger to meet via the shared virtual environment; and indicating, within the shared virtual environment via one of a seatback monitor or a personal electronic device operated by the first passenger, the recommended passenger.
In similar field of endeavor, LAKE teaches identifying, using the profile data associated with the first passenger, a recommended passenger to meet via the shared virtual environment; and indicating, within the shared virtual environment via one of a seatback monitor or a personal electronic device operated by the first passenger, the recommended passenger (see figs. 2-10 and 20, par. 0203, wherein FIG. 20 is a flow chart showing an algorithm for inviting more passenger(s) to the common conveyance according to one or more embodiments. The invitation process C begins at 2000. At 2010, the one or more processors obtain preference data for one or more passengers from the database of profile data 700 for the passengers in a manner described with respect to the process 1435 above. At 2020, the one or more processors optimize matches in preferences of the one or more passengers from a pool of passengers. In an aspect, the one or more processors may a rule-based algorithm, a predictive analytics (AI) algorithm (interactive media content AI), or a combination of both, to optimize the matches. For example, suppose Passenger A requesting a common conveyance from Santa Monica to Burbank likes to play karaoke game, likes the Beach Boys, and also likes to play a social behavior rewarding game. Similarly, Passenger D requesting a common conveyance from Downtown Los Angeles to Burbank likes to play karaoke game and likes the Beach Boys, but does not like to play social behavior rewarding game. Passenger E requesting a common conveyance from Santa Monica to Sherman Oaks likes to play a social behavior rewarding game. The preferred trip times for all the passengers coincide. At the process 2030, the method 2000 may optimize the matches, and as a result, identifies Passenger E as the matching passenger. In this example, the one or more processors gave more weight to the geographic factors (Santa Monica vs. Downtown Los Angeles) and shared affinities (social behavior). 
At the process 2040, the one or more processors invite the matching passenger E to join the common conveyance taken by Passenger A. At 2050, passenger E joins the common conveyance, for example, by affirmatively responding to the invitation from the processor; as taught by LAKE).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI, O’BRIEN and MIRANDA apparatus to include the teachings of LAKE identifying, using the profile data associated with the first passenger, a recommended passenger to meet via the shared virtual environment; and indicating, within the shared virtual environment via one of a seatback monitor or a personal electronic device operated by the first passenger, the recommended passenger. Such a person would have been motivated to make this combination as it is advantageous for passengers to be matched with other passengers who share the same interests (see also LAKE, par. 0005).
Claim 16 amounts to the method performed by the system of claim 2. Accordingly, claim 16 is rejected for substantially the same reasons as presented above for claim 2 and based on the references’ disclosure of the necessary supporting hardware and software.
Claim 21 amounts to the non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor of an in-vehicle system, cause the in-vehicle system to implement a method performed by the system of claim 2. Accordingly, claim 21 is rejected for substantially the same reasons as presented above for claim 2 and based on the references’ disclosure of the necessary supporting hardware and software.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over DONDERICI (US20230386138A1) in view of O’BRIEN et al. (US20240354807A1) and further in view of MIRANDA et al. (US11416635B1) and further in view of BRUNET DE COURSSOU et al. (US20080214310A1) [hereinafter BRUNET].
As to claim 5, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 1. DONDERICI, O’BRIEN and MIRANDA do not expressly teach wherein the instructions further cause the IFEC system to: subsequent to the commercial passenger vehicle concluding the journey, store, in profile data associated with the first passenger, an indication that the first virtual avatar was used during the journey, wherein the indication is configured to be provided to the first passenger prior to a second journey by the first passenger.
In similar field of endeavor, BRUNET teaches wherein the instructions further cause the IFEC system to: subsequent to the commercial passenger vehicle concluding the journey, store, in profile data associated with the first passenger, an indication that the first virtual avatar was used during the journey, wherein the indication is configured to be provided to the first passenger prior to a second journey by the first passenger (see figs. 1-10, par. 0028, wherein FIG. 2 shows an exemplary Player Profile Ticket, according to an embodiment of the present invention. During a gaming session, players may choose to have their current game status, player preferences, as well any other pertinent in game information (such as, for example, a player-selected avatar) stored on a Player Profile Ticket 202. This ticket may feature key information including: a heading describing its use as shown at 204, a statement regarding its cash value or lack thereof as shown at 206, a timestamp 208, and a bar code 210 or other machine readable code or indexing device to allow for information storage and retrieval. While Player Profile Tickets (examples of which are shown at FIGS. 2 and 4-10) represent one form of player profile storage device, although devices such as PIN based keypad systems and portable memory media (USB Flash drive, MP3 player memory, mobile phone memory, camera memory, media memory, XBOX player memory, PlayStation player memory, for example, secured by regulatory approved security means) may also be advantageously employed to store player profiles and other player information according to the inventions described herein; see also par. 0029; as taught by BRUNET).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI, O’BRIEN and MIRANDA apparatus to include the teachings of BRUNET wherein the instructions further cause the IFEC system to: subsequent to the commercial passenger vehicle concluding the journey, store, in profile data associated with the first passenger, an indication that the first virtual avatar was used during the journey, wherein the indication is configured to be provided to the first passenger prior to a second journey by the first passenger. Such a person would have been motivated to make this combination as it is beneficial for the users to keep their profile so that the avatars will relate to the completed session and the continuation of the session will present the same profiles so there is no confusion (see also BRUNET, pars. 0005-0010).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over DONDERICI (US20230386138A1) in view of O’BRIEN et al. (US20240354807A1) and further in view of MIRANDA et al. (US11416635B1) and further in view of SORYAL et al. (US20240160272A1).
As to claim 7, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 1. DONDERICI, O’BRIEN and MIRANDA do not expressly teach wherein the plurality of virtual avatars associated with the first passenger comprises a particular virtual avatar associated with a different shared virtual environment, and wherein the instructions further cause the IFEC system to: receive, from the first passenger, a request to modify attributes of the particular virtual avatar; and transmit, to a system operating the different shared virtual environment, an indication of the request to modify the particular virtual avatar.
In similar field of endeavor, SORYAL teaches wherein the plurality of virtual avatars associated with the first passenger comprises a particular virtual avatar associated with a different shared virtual environment, and wherein the instructions further cause the IFEC system to: receive, from the first passenger, a request to modify attributes of the particular virtual avatar; and transmit, to a system operating the different shared virtual environment, an indication of the request to modify the particular virtual avatar (see figs. 1-4, par. 0037, wherein the first avatar defined and/or otherwise operating in the first virtual environment may provide stimuli to change and/or otherwise modify some aspect of the second, independent virtual environment. Such modifications may include, without limitation, projecting into the second, independent virtual environment one or more of an image, a sound, a touch, a modification of a virtual object defined and/or existing within the second virtual environment, e.g., by changing its shape and/or repositioning it. For example, the first avatar defined and/or otherwise operating within the first virtual environment may speak an/or direct a gesture, e.g., a wave, to the second avatar defined and/or otherwise operating within the second, independent virtual environment. In at least some embodiments, the other avatar may observe the projection, e.g., the sound and/or gesture substantially synchronously with an action by the first avatar within the first virtual environment. Similarly, the first avatar defined and/or otherwise operating in the first environment may synchronously manipulate an object within the second environment by moving and/or changing a shape or configuration of the object. 
Such manipulations of the object within the second virtual environment may occur substantially synchronously with manipulations by the first avatar in the first virtual environment, such that the second avatar may observe the manipulations in real time; as taught by SORYAL).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI, O’BRIEN and MIRANDA apparatus to include the teachings of SORYAL wherein the plurality of virtual avatars associated with the first passenger comprises a particular virtual avatar associated with a different shared virtual environment, and wherein the instructions further cause the IFEC system to: receive, from the first passenger, a request to modify attributes of the particular virtual avatar; and transmit, to a system operating the different shared virtual environment, an indication of the request to modify the particular virtual avatar. Such a person would have been motivated to make this combination as it is beneficial for the users to be able to reflect the changes in one virtual environment into another virtual environment so that the user’s avatar is consistent with the changes that are done in the original environment.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over DONDERICI (US20230386138A1) in view of O’BRIEN et al. (US20240354807A1) and further in view of MIRANDA et al. (US11416635B1) and further in view of ISHII et al. (US20240144413A1).
As to claim 15, DONDERICI, O’BRIEN and MIRANDA teach the limitations of claim 12. DONDERICI, O’BRIEN and MIRANDA do not expressly teach wherein the anonymity level is one of a plurality of anonymity levels, at least one of which includes a given virtual avatar for a given passenger visually indicating a vehicle seat number of the given passenger.
In similar field of endeavor, ISHII teaches wherein the anonymity level is one of a plurality of anonymity levels, at least one of which includes a given virtual avatar for a given passenger visually indicating a vehicle seat number of the given passenger (see figs. 1-9, par. 0105, wherein the passenger information 140 shown in FIG. 7 shows the passenger list for the virtual passenger airplane (JBL527). Seat numbers are set for the seats of the virtual passenger airplane; see also par. 0114, wherein the boarding experience provision unit 222 transmits a three-dimensional image of the cabin space of the virtual passenger airplane. The user controls the user's own avatar to move to his/her seat and take a seat; as taught by ISHII).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI, O’BRIEN and MIRANDA apparatus to include the teachings of ISHII wherein the anonymity level is one of a plurality of anonymity levels, at least one of which includes a given virtual avatar for a given passenger visually indicating a vehicle seat number of the given passenger. Such a person would have been motivated to make this combination as it is beneficial to be able to identify where the user is sitting so that the navigation to the place will be easier.
Claims 17-18, 20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over DONDERICI (US20230386138A1) in view of MIRANDA et al. (US11416635B1) and further in view of LAKE-SCHAAL et al. (US20240335738A1) [hereinafter LAKE].
As to claim 17, DONDERICI teaches a non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor of an in-vehicle system, cause the in-vehicle system to implement a method comprising: subsequent to a commercial passenger vehicle departing on a journey, generating an in-flight virtual environment that is accessible by passengers of the commercial passenger vehicle (see figs. 1-7, par. 0109, wherein Example 18 depicts a computer system, including a computer processor for executing computer program instructions; and one or more non-transitory computer-readable media storing computer program instructions executable by the computer processor to perform operations including: providing, to a user, a ride in a real-world environment through a vehicle, the user associated with a client device; generating a virtual scene based on the ride of the user in the vehicle, the virtual scene including a virtual representation of the user or the vehicle; and sending the virtual scene to the client device, the client device configured to display the virtual scene to the user; see also par. 0015; as taught by DONDERICI),
the in-flight virtual environment being synchronized from a ground-based platform that provides a shared virtual environment (see fig. 3, par. 0043, wherein FIG. 3 is a block diagram showing the fleet management system according to some embodiments of the present disclosure. The fleet management system 120 includes a client device interface 310, various data stores 340-460, and a vehicle manager 370. The client device interface 310 includes a ride request interface 320 and user setting interface 330. The data stores include user ride datastore 340, map datastore 350, and user interest datastore 360. The vehicle manager 370 includes a vehicle dispatcher 380 and an AV interface 390. In alternative configurations, different and/or additional components may be included in the fleet management system 120. Further, functionality attributed to one component of the fleet management system 120 may be accomplished by a different component included in the fleet management system 120 or a different system than those illustrated; as taught by DONDERICI);
selecting a first virtual avatar from a plurality of virtual avatars associated with the first passenger to represent the first passenger within the in-flight virtual environment (see par. 0014, wherein a user, such as a passenger of an AV, may engage in the virtual environment via a virtual representation (e.g., avatars) of himself or herself; see also par. 0068, wherein the virtual environment manager 450 generates a virtual scene that simulates the AV 110 and one or more passengers in the AV 110, and can place the virtual scene into a virtual environment that includes other simulated AVs and avatars of other people; as taught by DONDERICI),
facilitating, within the shared virtual environment, a virtual interaction between the first virtual avatar representing the first passenger and a second virtual avatar representing a second passenger of the commercial passenger vehicle (see par. 0070, wherein the passenger can interact with other people through the virtual environment. For instance, the passenger and another person can access the same virtual environment, which includes virtual representations of both. The avatar of the passenger can interact with the avatar of the person in the same virtual environment. They can talk to each other, send messages to each other, play a game together, work together, and so on. In some embodiments, the interaction may represent a real-world interaction. For instance, the virtual environment manager 450 detects an interaction between the passenger and another person, e.g., based on sensor data. The virtual environment manager 450 projects the interaction into the virtual environment. The virtual environment manager 450 may generate one or more messages based on the detected interaction and include the messages in the virtual environment. The passenger and person may continue their interaction through the virtual environment; see also par. 0052-0053, 0071 and 0088-0090; as taught by DONDERICI).
DONDERICI does not expressly teach the first virtual avatar being configured to represent the first passenger according to an anonymity level; and subsequent to re-establishing connection to a ground-based server system at an end of the journey, declining to synchronize the virtual interaction with the ground-based server system, such that the ground-based server system has no record of the virtual interaction when providing the shared virtual environment.
In similar field of endeavor, MIRANDA teaches the first virtual avatar being configured to represent the first passenger according to an anonymity level (see col. 2, ll. 6-34, wherein FIG. 1 illustrates an example embodiment of a system for providing a pseudonymous browsing mode. The method 100 includes receiving, by a processor of a computer, input from a user requesting a level of anonymity for a session on an application or website, at operation 105. In various embodiments, the requested level of anonymity is between open browsing and completely incognito browsing. In various embodiments, the requested level of anonymity is closer to open browsing but partially incognito. In various embodiments, the requested level of anonymity is closer to incognito browsing but partially open. In some embodiments, the requested level of anonymity is open browsing. In some embodiments, the requested level of anonymity is completely incognito browsing. In these examples, the avatars provide the anonymity. In some examples, a classification of avatars is used including a preconfigured avatar for specific interactions. In various examples, an avatar may be used for a single instance and then may access dynamic open synthetic data that is closest to the user to create an avatar for interaction, where this avatar could be available for reuse in future. At operation 110, the processor programs an avatar configured to provide the requested level of anonymity to an identity of the user (such as previously-generated data identifying the user) and data generated by the user based on the received input. At operation 115, the processor uses the avatar to control an amount of data shared by the user with the application or website, to provide the requested level of anonymity to an identity of the user and data generated by the user; as taught by MIRANDA).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI apparatus to include the teachings of MIRANDA the first virtual avatar being configured to represent the first passenger according to an anonymity level. Such a person would have been motivated to make this combination as a user of online services may encounter situations in which it would be desirable to share more or less data generated or used by the user (MIRANDA, col. 1, ll. 12-15).
DONDERICI and MIRANDA do not expressly teach and subsequent to re-establishing connection to a ground-based server system at an end of the journey, declining to synchronize the virtual interaction with the ground-based server system, such that the ground-based server system has no record of the virtual interaction when providing the shared virtual environment.
In similar field of endeavor, LAKE teaches and subsequent to re-establishing connection to a ground-based server system at an end of the journey, declining to synchronize the virtual interaction with the ground-based server system, such that the ground-based server system has no record of the virtual interaction when providing the shared virtual environment (see figs 9-15, par. 0176, wherein although not shown in FIG. 13C, the augmentation process 1338 may use the most recent copies of the data structures 350, 1350 available. For example if an intermediate location or visible place aspect of the existing trip data is deleted or changed as a result of the new trip data or trip event, the server may remove components picked for a deleted location or visible place, add components picked for the new location or visible place, and if a preference data is deleted or changed via passenger input, the server may remove components picked for a deleted preference, add components picked for the new preference; the server may then adjust components based on changes in estimated durations; see also par. 0177; as taught by LAKE).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI and MIRANDA apparatus to include the teachings of LAKE wherein subsequent to re-establishing connection to a ground-based server system at an end of the journey, declining to synchronize the virtual interaction with the ground-based server system, such that the ground-based server system has no record of the virtual interaction when providing the shared virtual environment. Such a person would have been motivated to make this combination as it is beneficial for the users to delete data that is deemed unnecessary so that there is less confusion in the future (see also LAKE, par. 0005).
As to claim 18, DONDERICI, MIRANDA and LAKE teach the limitations of claim 17. MIRANDA further teaches wherein the method implemented by the in-vehicle system further comprising: evaluating a respective anonymity level that characterizes each of the plurality of virtual avatars based on profile data associated with the first passenger (see fig. 1, col. 3, ll. 25-39, wherein the avatar includes a profile, or digital certificate, on the client side that is adaptive such that the browser selects and saves the specific profile based on which application or website the user is interfacing, in various embodiments. The avatar determines whether cookies of the user get stored during interaction with the application or website, in various embodiments. In some embodiments, the avatar protects user configurations and provides confidentiality to the user during one or more browsing sessions. The present subject matter provides the ability to switch between levels of privacy for browsing with respect to specific data, as well as with respect to specific sites or applications. In some embodiments, the avatar can track what data of a user is exposed, to provide warnings and/or to identify potential monetization opportunities; as taught by MIRANDA),
wherein the first virtual avatar is selected based on the respective anonymity level characterizing the first virtual avatar satisfying the anonymity level indicated by the first passenger (see col. 4, ll. 19-31, wherein instructions that when executed by computers, cause the computers to perform operations of receiving input from a user requesting a level of anonymity for a session on an application or website, wherein the requested level of anonymity is between open browsing and completely incognito browsing, and programming an avatar configured to provide the requested level of anonymity to an identity of the user and data generated by the user based on the received input. Further operations include using the avatar to control an amount of data shared by the user with the application or website to provide the requested level of anonymity to an identity of the user and data generated by the user; see also col. 2, ll. 1-5, and col. 3, ll. 14-25; as taught by MIRANDA).
As to claim 20, DONDERICI, MIRANDA and LAKE teach the limitations of claim 17. LAKE further teaches wherein the shared virtual environment is generated to be further accessible by passengers of at least one other commercial passenger vehicle, the at least one other commercial passenger vehicle and the commercial passenger vehicle being operated by a common mobility provider (see figs 9-12, par. 0133, wherein the interactive media configuration operation 900 may include supporting continuity of play or execution of interactive media title between different vehicles, platforms, and or rides. For example, suppose the group of passengers initially receives an interactive media content (e.g., the Harry Potter movie) for enjoyment inside a ride share (such as Uber, etc.) on vehicle 106 on his way to a first destination (e.g., the airport). Once arriving at the first destination (e.g., the airport) or upon receiving an input from the group of passengers to pause the play of the interactive media content, the processor(s) executing the selecting/configuring operation 930 may pause the interactive media content. The position of the pause in the interactive media content may be stored as part of the passenger profile data 1010 as metadata or the like. Then, once the group of passengers boards another vehicle 108 (e.g., an airplane for the passenger's flight) or upon specifically requesting resuming the interactive media content on a different vehicle 108 (or a different aesthetic output device 300), the interactive media content may be seamlessly resumed to support continuity of the interactive media content; see also par. 0134; as taught by LAKE).
As to claim 22, DONDERICI, MIRANDA and LAKE teach the limitations of claim 17. MIRANDA further teaches wherein the method implemented by the in-vehicle system further comprises: based on analyzing the virtual interaction, query the first passenger to use a different virtual avatar from the plurality of virtual avatars, the different virtual avatar being configured to convey a fewer number of personal attributes based on being associated with a higher anonymity level relative to the first virtual avatar (see col. 4, ll. 19-31, wherein instructions that when executed by computers, cause the computers to perform operations of receiving input from a user requesting a level of anonymity for a session on an application or website, wherein the requested level of anonymity is between open browsing and completely incognito browsing, and programming an avatar configured to provide the requested level of anonymity to an identity of the user and data generated by the user based on the received input. Further operations include using the avatar to control an amount of data shared by the user with the application or website to provide the requested level of anonymity to an identity of the user and data generated by the user; see also col. 2, ll. 1-5, and col. 3, ll. 14-25; as taught by MIRANDA).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over DONDERICI (US20230386138A1) in view of MIRANDA et al. (US11416635B1) and further in view of LAKE-SCHAAL et al. (US20240335738A1) [hereinafter LAKE] and further in view of JACKSON et al. (US20210012676A1).
As to claim 19, DONDERICI, MIRANDA and LAKE teach the limitations of claim 17. DONDERICI, MIRANDA and LAKE do not expressly teach wherein the in-flight virtual environment comprises a bulletin board via which the first passenger provides an invitation for virtual interactions with other passengers of the commercial passenger vehicle, wherein the virtual interaction is facilitated based on the second passenger viewing the invitation by the first passenger via the bulletin board.
In similar field of endeavor, JACKSON teaches wherein the in-flight virtual environment comprises a bulletin board via which the first passenger provides an invitation for virtual interactions with other passengers of the commercial passenger vehicle, wherein the virtual interaction is facilitated based on the second passenger viewing the invitation by the first passenger via the bulletin board (see figs. 21-25, par. 0124, wherein Facial detection and eye detection allow for creation of an avatar in a user's virtual environment. The avatar can communicate with other users within the same virtual environment through message boards and forums about content, opinions, and other relevant topics. In some embodiments, facial features are only used for creation of an avatar. Users can provide consent and approval prior to initiation of facial detection. Immediately upon avatar creation, the facial features can be garbage collected for security purposes; see also par. 0117; as taught by JACKSON).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the DONDERICI, MIRANDA and LAKE apparatus to include the teachings of JACKSON wherein the in-flight virtual environment comprises a bulletin board via which the first passenger provides an invitation for virtual interactions with other passengers of the commercial passenger vehicle, wherein the virtual interaction is facilitated based on the second passenger viewing the invitation by the first passenger via the bulletin board. Such a person would have been motivated to make this combination as it is beneficial for the users to be able to share ideas through a common tool and ask for help or ideas if they are not sure who could be approached for the expert assistance on a subject (see also JACKSON, par. 0003).
Allowable Subject Matter
Claims 3 and 14 are objected to as being dependent upon rejected base claims, but would be allowable if rewritten in independent form including all of the limitations of the base claims and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Publication Number / Filing Date / Title
US20180204370A1 / 2018-01-18 / Passenger Avatar for Display on Shared Vehicles
US20170247000A1 / 2017-01-06 / User interface and virtual personality presentation based on user profile
US20100217458A1 / 2008-06-25 / Interactive information system for an airplane
US20100161456A1 / 2008-12-22 / Sharing virtual space in a virtual universe
US20240031346A1 / 2022-07-19 / Managing virtual data objects in virtual environments
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KOOROSH NEHCHIRI whose telephone number is (408)918-7643. The examiner can normally be reached M-F, 9-5 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William L. Bashore can be reached on 571-272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KOOROSH NEHCHIRI/Examiner, Art Unit 2174
/WILLIAM L BASHORE/ Supervisory Patent Examiner, Art Unit 2174