DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 12/4/2024, 2/19/2025, 5/22/2025, and 7/11/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Election/Restrictions
The examiner reserves the right to issue an election/restriction requirement should the independent claims be amended, or additional claims be added, that render the claims independent and distinct, each from the other.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5 and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al. (US 2017/0243403), hereinafter Daniels, in view of Tucker et al. (US 2023/0199148), hereinafter Tucker.
As for claim 1, Daniels teaches a system (Fig. 1; paragraph [0050] describes a system), the system comprising:
a first device (paragraphs [0061] and [0064] describe computer devices (130 and 140) of onsite and offsite augmented reality (AR) users; an onsite user in proximity to a particular physical location can carry the computer device 130, and the onsite users use respective mobile digital devices (MDDs)) configured to:
receive a first operation (paragraph [0071] describes a MDD creates augmented reality (AR) data or content, tethered to a specific location in the AR world, based on user instructions it receives);
determine a first location in a physical space (paragraphs [0065] and [0071] describe a mobile digital device (MDD) of an onsite user acquires real-world positioning data using different techniques including GPS and other data about physical position); and
display first content in response to the first operation (paragraph [0071] describes the MDD creates AR data, tethered to a specific location in the AR world. The specific location of the AR data is identified with respect to environmental information within the LockAR data set; paragraph [0169] describes the LockAR is used to position AR content for creation and viewing against the backdrop of the real world. When viewing areas of the device and the AR content intersect, the device begins to display the AR content); and
a second device comprising a camera and configured to (paragraphs [0046]-[0047] describe mobile devices include a variety of on-board sensors that enable the mobile device to obtain measurements of the surrounding real-world environment; these sensors include a camera):
obtain the first content (paragraph [0071] describes an offsite digital device (OSDD) receives the AR content and the LockAR data specifying the location of the AR content; paragraphs [0080] and [0086] describe onsite device A1 sends AR content to the other devices);
capture an image using the camera (paragraph [0048] describes the mobile device uses sensor measurements to determine a positioning of the mobile device within the real-world environment; paragraph [0084] describes on-site devices A1 to AN create AR versions of the real-world location based on the live views of the location they capture); and
display the first content in a superimposing manner on the image so that a display location of the first content overlaps the first location (paragraph [0071] describes the AR application of the OSDD places the received AR content within an offsite, simulated, virtual background/environment. User 2 sees an offsite virtual augmented reality, which is a completely VR experience substantially resembling the augmented reality seen by User 1 onsite; paragraphs [0088]-[0091] describe an offsite user selects a piece of AR content to view, or a geographic location to view AR content from. The ovAR application renders the AR content and background environment based on the information it receives, and updates the rendering as the ovAR application continues to receive information).
Daniels fails to teach
wherein preparing first content includes displaying first content; and
wherein content is displayed in a superimposing manner.
Tucker discloses
wherein preparing first content includes displaying first content (paragraphs [0061] and [0066] describe a conference participant shares slides or another digital document via their own personal device, and the slides of a slide presentation or other shared digital document are being presented on a TV display);
wherein content is displayed in a superimposing manner (paragraphs [0076]-[0079] describe a first device receives a camera feed of a meeting or separate video stream from a second device, and the first device overlays on the camera feed as being or to be presented on a second display; paragraphs [0013]-[0015] describe the video feed may be superimposed for the video feed to appear, via the second display, as though the first display is presenting the video feed. The video may be superimposed to appear as though being presented from a location of the first display but at a different angular orientation than the first display itself).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Tucker for superimposing a camera feed or a video feed on a second display. The teachings of Tucker, when implemented in the Daniels system, would allow one of ordinary skill in the art to apply virtual reality techniques to participants of a meeting. One of ordinary skill in the art would be motivated to utilize the teachings of Tucker in the Daniels system in order to provide a virtual immersion experience to meeting participants.
As for claim 3, the combined system of Daniels and Tucker teaches wherein after displaying the first content (Tucker: paragraph [0061] describes a slide of a slide presentation or other shared digital document is presented on a TV display; Daniels: paragraph [0065] describes the MDD prepares an onsite canvas for creating the AR event), the first device is further configured to:
receive a second operation for the first content (Tucker: paragraph [0061] describes the slide being presented on a TV display also establishes a separate video feed that is to also be overlaid on a portion of a camera feed showing a field of view of a conference room location; Daniels: paragraph [0066] describes the MDD performs the LockAR technique); and
perform, in response to the second operation, a third operation for the first content (Tucker: paragraph [0061] describes the camera feed is being transmitted to a remote-located conference participant), the third operation comprising switching, pausing, or starting a display picture of the first content (Daniels: paragraph [0068] describes the MDD sends editing invitations to the AR applications of offsite users); and
send information indicating the second operation (Daniels: paragraph [0069] describes the MDD sends site-specific data to the server which then propagates it to the OSDD of User 2; paragraph [0080] describes the user of onsite device invites friends to participate in a hot-edit event), and
wherein after displaying the first content (Daniels: paragraphs [0080]-[0081] describe the onsite device A1 sends AR content to the other devices via the cloud server. Onsite devices A1 to AN composite the AR content with live views of the location to create the augmented reality scene for their users. The user of onsite device A1 creates a piece of AR content, which is also displayed at other participating devices), the second device is further configured to:
receive the information from the first device (Daniels: paragraph [0080] describes the onsite device invites friends to participate in a hot-edit event); and
perform a fourth operation for the first content (Daniels: paragraph [0081] describes onsite device A2 edits the new AR content that was previously edited by onsite device A1), wherein the fourth operation comprises switching, ending, pausing, or starting a display picture of the first content (Daniels: paragraph [0081] describes the onsite device A2 changes the AR content that was previously edited by onsite device A1; Tucker: paragraph [0080] describes the first device determines whether a selector has been selected to switch back to presenting the camera feed itself without separate video overlay).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Tucker for editing a camera feed or a video feed on a second display. The teachings of Tucker, when implemented in the Daniels system, would allow one of ordinary skill in the art to apply virtual reality techniques to participants of a meeting. One of ordinary skill in the art would be motivated to utilize the teachings of Tucker in the Daniels system in order to provide a virtual immersion experience to meeting participants.
As for claim 4, the combined system of Daniels and Tucker teaches wherein after displaying the first content (Tucker: paragraph [0061] describes a slide of a slide presentation or other shared digital document is presented on a TV display; Daniels: paragraph [0065] describes the MDD prepares an onsite canvas for creating the AR event), the first device is further configured to:
receive a second operation for the first content (Tucker: paragraph [0061] describes the slide being presented on a TV display also establishes a separate video feed that is to also be overlaid on a portion of a camera feed showing a field of view of a conference room location; Daniels: paragraph [0066] describes the MDD performs the LockAR technique); and
perform, in response to the second operation, a third operation for the first content (Tucker: paragraph [0061] describes the camera feed is being transmitted to a remote-located conference participant), the third operation comprising switching, pausing, or starting a display picture of the first content (Daniels: paragraph [0068] describes the MDD sends editing invitations to the AR applications of offsite users); and
send information indicating the second operation (Daniels: paragraph [0069] describes the MDD sends site-specific data to the server which then propagates it to the OSDD of User 2; paragraph [0080] describes the user of onsite device invites friends to participate in a hot-edit event), and
wherein after displaying the first content (Daniels: paragraphs [0080]-[0081] describe the onsite device A1 sends AR content to the other devices via the cloud server. Onsite devices A1 to AN composite the AR content with live views of the location to create the augmented reality scene for their users. The user of onsite device A1 creates a piece of AR content, which is also displayed at other participating devices), the second device is further configured to:
receive the information from the first device (Daniels: paragraph [0080] describes the onsite device invites friends to participate in a hot-edit event); and
perform a fourth operation for the first content (Daniels: paragraph [0081] describes onsite device A2 edits the new AR content that was previously edited by onsite device A1), the fourth operation comprising moving, rotating, or deforming a display picture of the first content (Daniels: paragraph [0081] describes the onsite device A2 changes the AR content that was previously edited by onsite device A1; Tucker: paragraph [0080] describes the first device determines whether a selector has been selected to switch back to presenting the camera feed itself without separate video overlay).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Tucker for editing a camera feed or a video feed on a second display. The teachings of Tucker, when implemented in the Daniels system, would allow one of ordinary skill in the art to apply virtual reality techniques to participants of a meeting. One of ordinary skill in the art would be motivated to utilize the teachings of Tucker in the Daniels system in order to provide a virtual immersion experience to meeting participants.
As for claim 5, the combined system of Daniels and Tucker teaches wherein the first content comprises multimedia content required by the conference (Tucker: paragraph [0028] describes slide presentation for a conference).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Tucker for providing a slide presentation for a conference. The teachings of Tucker, when implemented in the Daniels system, would allow one of ordinary skill in the art to apply virtual reality techniques to participants of a meeting. One of ordinary skill in the art would be motivated to utilize the teachings of Tucker in the Daniels system in order to provide a virtual immersion experience to meeting participants.
As for claim 7, the combined system of Daniels and Tucker teaches wherein the second device comprises a first control configured to trigger conference joining and comprises a second control configured to trigger conference creating (Daniels: paragraph [0071] describes the MDD creates AR data or content based on user instructions it receives through the user interface of the AR application; paragraphs [0077] and [0080] describe onsite devices start an AR application and then connect to the central system and the user of onsite device invites one or more friends to participate in the events; Tucker: paragraphs [0028] and [0063] describe a meeting as augmented reality meeting).
As for claim 8, the combined system of Daniels and Tucker teaches wherein the second device is further configured to adjust an orientation and/or a focal length of the camera so as to adjust a browsing angle and/or clarity of the first content (Tucker: paragraph [0065] describes the AR software is used to render the separate video feed on the remote user’s display such that the separate video feed appears, via the remote user’s display, in a first angular orientation corresponding to a second angular orientation of the TV display itself as shown in the camera feed).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Tucker for using AR software to render a video feed on a remote user’s display. The teachings of Tucker, when implemented in the Daniels system, would allow one of ordinary skill in the art to apply virtual reality techniques to participants of a meeting. One of ordinary skill in the art would be motivated to utilize the teachings of Tucker in the Daniels system in order to further enhance the virtual immersion experience of meeting participants.
As for claim 9, the combined system of Daniels and Tucker teaches wherein the first content comprises a video, a picture, a text, a photo, or a chart (Daniels: paragraph [0004] describes AR is a live view of a real-world environment that includes computer generated sound, video, graphics, text with positioning data; paragraph [0071] describes the MDD sends AR content), and wherein the first content is two-dimensional planar content and/or three-dimensional stereoscopic content (Daniels: paragraph [0072] describes 2-D and 3-D AR content).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Daniels (US 2017/0243403) in view of Tucker (US 2023/0199148) further in view of Wang et al. (US 2023/0012929), hereinafter Wang.
As for claim 2, the combined system of Daniels and Tucker teaches wherein the system further comprises a server (Daniels: paragraph [0014] describes a server), wherein the first device is further configured to send indication information to the server (Daniels: paragraph [0087] describes a user of an onsite device provides an input, the visual result is viewed by the user, the user creates AR content, change events are initiated and performed, and output is provided to the cloud server system as data input).
The combined system of Daniels and Tucker fails to teach
wherein a second device is further configured to
receive a second operation,
send a second location of the second device to the server in response to the second operation;
receive the indication information from the server when the second device is within the preset range; and
further obtain the first content based on the indication information, and
wherein the server is configured to:
determine, based on the second location, that the second device is within the preset range; and
send the indication information to the second device.
Wang discloses
receive a second operation (paragraph [0044] describes each receiving client periodically sends its location to the server, which is a result of a user opening the messaging app on his or her device, or selecting a refresh option (i.e., the user’s action is an operation));
send a second location of the second device to a server in response to the second operation (paragraph [0044] describes each receiving client periodically sends its location to the server);
receive indication information from the server when the second device is within the preset range (paragraph [0044] describes the server receives message indicating a receiving client’s location, the server will identify any messages previously sent to the receiving client, by the sending clients. If the messages have a location associated with them, and if the receiving client is in a given range of the location, the message content will be sent to the receiving client; paragraphs [0051]-[0052] describe the server determines which of the identified messages should actually be notified or sent to the receiving client); and
further obtain the first content based on the indication information (paragraph [0054] describes the receiving client is within a sending distance, the server sends the message content of the multi-position message to the client, which displays the message to the user in an AR interface),
wherein the server is configured to:
determine, based on the second location, that the second device is within the preset range (paragraphs [0044] and [0049]-[0051] describe the server receives clients’ locations and a location update message from a given receiving client, the server determines which of the identified messages should actually be notified or sent to the receiving client. For each of the identified multi-position messages, the server determines the location associated with that multi-position message that is closest to the client, for each of those locations, whether the location is within a “notification distance” of the client); and
send the indication information to the second device (paragraph [0052] describes the closest location is within the notification distance, the server sends a notification of the multi-position message to the receiving client).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Wang for overlaying a camera feed on a display. The teachings of Wang, when implemented in the Daniels and Tucker system, would allow one of ordinary skill in the art to apply virtual reality techniques to messages that are sent to a receiver. One of ordinary skill in the art would be motivated to utilize the teachings of Wang in the Daniels and Tucker system in order to offer location-tagged messages that are displayable on devices present at the associated locations.
Claims 10-12, 23 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels (US 2017/0243403) in view of Cha et al. (US 2013/0054697), hereinafter Cha.
As for claim 10, Daniels teaches a method implemented by a first device (paragraph [0058] describes methods of sharing AR experience), and comprising:
receiving a first operation (paragraph [0071] describes a MDD creates augmented reality (AR) data or content, tethered to a specific location in the AR world, based on user instructions it receives);
determining a first location in a physical space (paragraphs [0065] and [0071] describe a mobile digital device (MDD) of an onsite user acquires real-world positioning data using different techniques including GPS and other data about physical position);
displaying first content in response to the first operation (paragraph [0062] describes the computer device includes an AR application configured to display AR content overlaid on a real-time view of a real-world environment; paragraphs [0088]-[0091] describe an offsite user selects a piece of AR content to view, or a geographic location to view AR content from. The ovAR application renders the AR content and background environment based on the information it receives, and updates the rendering as the ovAR application continues to receive information).
Daniels fails to teach
enabling obtaining of a first content by a second device when the second device is within a preset range of a first location.
Cha discloses
enabling obtaining of a first content by a second device when the second device is within a preset range of a first location (paragraph [0028] describes in order to share contents with a second device, a first device generates a third data; paragraphs [0030]-[0031] and [0038] describe the third data includes location data of the first device. A server uses the location data of the first device and the location data of the second device to determine if the second device is allowed to receive shared content. The second device is allowed to receive shared contents from the server if the second device is within the reference distance of the first device).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Cha for sharing contents to a device based on a determined distance. The teachings of Cha, when implemented in the Daniels system, will allow one of ordinary skill in the art to enforce data sharing security. One of ordinary skill in the art would be motivated to utilize the teachings of Cha in the Daniels system in order to regulate access to shared contents.
As for claim 11, the combined system of Daniels and Cha teaches wherein enabling obtaining of the first content by the second device comprises sending indication information of the first content to a server so that the server sends the indication information to the second device when the second device is within the preset range of the first location, and wherein the indication information enables the second device to obtain the first content (Daniels: paragraph [0071] describes the MDD sends the information about a newly created piece of AR content to the cloud server, which forwards the piece of AR content to the OSDD. The OSDD receives the AR content and the LockAR data specifying the location of the AR content; Cha: paragraph [0038] describes the server uses location information to regulate access to the shared contents. For example, the server provides access to the shared contents if the device requesting the shared contents is located within a reference location).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Cha for sharing contents to a device based on a determined distance. The teachings of Cha, when implemented in the Daniels system, will allow one of ordinary skill in the art to enforce data sharing security. One of ordinary skill in the art would be motivated to utilize the teachings of Cha in the Daniels system in order to regulate access to shared contents.
As for claim 12, the combined system of Daniels and Cha teaches wherein enabling obtaining of the first content by the second device further comprises:
obtaining a location of the second device (Cha: paragraph [0038] describes the server uses GPS data or WPS data to regulate access to the shared contents and the second device transmits location information to the server);
determining based on the location, whether the second device is within the preset range (Cha: paragraph [0038] describes the server searches for shared contents which matches the location condition); and
further sending the indication information to the second device when the second device is within the preset range (Daniels: paragraph [0071] describes the cloud server forwards the piece of AR content to the OSDD, and the OSDD receives the AR content and the LockAR data specifying the location of the AR content; Cha: paragraph [0031] describes if the second device requests shared contents, the server receives the location data of the second device, analyzes the locations of the second and the first devices, and transmits the shared content if the second device is within or outside of the reference distance).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Cha for sharing contents to a device based on a determined distance. The teachings of Cha, when implemented in the Daniels system, will allow one of ordinary skill in the art to enforce data sharing security. One of ordinary skill in the art would be motivated to utilize the teachings of Cha in the Daniels system in order to regulate access to shared contents.
As for claim 23, Daniels teaches a first device (paragraphs [0061] and [0064] describe computer devices (130 and 140) of onsite and offsite augmented reality (AR) users, an onsite user in proximity to a particular physical location can carry the computer device 103, the onsite users use respective mobile digital devices (MDD)), comprising:
a memory configured to store an instruction (paragraph [0236] describes storage subsystem storing instructions); and
one or more processors coupled to the memory and configured to execute the instruction to cause the first device to (paragraph [0236] describes processor devices that execute instructions to perform operations):
receive a first operation (paragraph [0071] describes the MDD creates AR data based on user instructions it received through the user interface of an AR application);
determine a first location in a physical space (paragraphs [0065] and [0071] describe a mobile digital device (MDD) of an onsite user acquires real-world positioning data using different techniques including GPS and other data about physical position);
display first content in response to the first operation (paragraph [0071] describes the MDD creates AR data, tethered to a specific location in the AR world. The specific location of the AR data is identified with respect to environmental information within the LockAR data set; paragraph [0169] describes the LockAR is used to position AR content for creation and viewing against the backdrop of the real world. When viewing areas of the device and the AR content intersect, the device begins to display the AR content); and
enable obtaining of the first content by a second device (paragraph [0071] describes the MDD creates AR data, the MDD sends the information about the newly created piece of AR content to a cloud server, which forwards the piece of AR content to an offsite virtual digital device (OSDD)).
Daniels fails to teach
wherein a first content is sent when a second device is within a preset range of a first location.
Cha discloses
wherein a first content is sent when a second device is within a preset range of a first location (paragraph [0031] describes if a second device requests shared contents, a server receives the location data of the second device, analyzes the location of the second device and the location of a first device included in the first device's location data, and transmits the shared contents if the second device is within or outside of the reference distance).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Cha for sharing contents to a device based on a determined distance. The teachings of Cha, when implemented in the Daniels system, will allow one of ordinary skill in the art to enforce data sharing security. One of ordinary skill in the art would be motivated to utilize the teachings of Cha in the Daniels system in order to prevent the risks of the exposure of personal information to unauthorized persons (Cha: paragraph [0007]).
As for claim 25, Daniels teaches a computer program product storing computer instructions which, when executed by one or more processors of a computing device, cause a first device to (paragraph [0236] describes a storage subsystem storing instructions and processor devices that execute instructions to perform operations):
receive a first operation (paragraph [0071] describes the MDD creates AR data based on user instructions it received through the user interface of an AR application);
determine a first location in a physical space (paragraphs [0065] and [0071] describe a mobile digital device (MDD) of an onsite user acquires real-world positioning data using different techniques including GPS and other data about physical position);
display first content in response to the first operation (paragraph [0071] describes the MDD creates AR data, tethered to a specific location in the AR world. The specific location of the AR data is identified with respect to environmental information within the LockAR data set; paragraph [0169] describes the LockAR is used to position AR content for creation and viewing against the backdrop of the real world. When viewing areas of the device and the AR content intersect, the device begins to display the AR content); and
enable obtaining of the first content by a second device (paragraph [0071] describes the MDD creates AR data, the MDD sends the information about the newly created piece of AR content to a cloud server, which forwards the piece of AR content to an offsite virtual digital device (OSDD)).
Daniels fails to teach
wherein a first content is sent when a second device is within a preset range of a first location.
Cha discloses
wherein a first content is sent when a second device is within a preset range of a first location (paragraph [0031] describes if a second device requests shared contents, a server receives the location data of the second device, analyzes the location of the second device and the location of a first device included in the first device's location data, and transmits the shared contents if the second device is within or outside of the reference distance).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Cha for sharing contents to a device based on a determined distance. The teachings of Cha, when implemented in the Daniels system, will allow one of ordinary skill in the art to enforce data sharing security. One of ordinary skill in the art would be motivated to utilize the teachings of Cha in the Daniels system in order to prevent the risks of the exposure of personal information to unauthorized persons (Cha: paragraph [0007]).
Claims 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels (US 2017/0243403) in view of Cha (US 2013/0054697) further in view of Tucker (US 2023/0199148).
As for claim 13, the combined system of Daniels and Cha fails to teach wherein after displaying a first content, the method further comprises:
receiving a second operation on the first content; and
performing, in response to the second operation, a third operation on the first content, the third operation comprising switching a display picture of the first content, ending the display picture, pausing the display picture, or starting the display picture.
Tucker discloses
wherein after displaying a first content (paragraph [0061] describes a slide of a slide presentation or other shared digital document is presented on a TV display), a first device is further configured to:
receive a second operation for the first content (paragraph [0061] describes the slide is being presented on a TV display also establishes a separate video feed that is to also be overlaid on a portion of a camera feed showing a field of view of a conference room location); and
perform, in response to the second operation, a third operation for the first content (paragraph [0061] describes the camera feed is being transmitted to a remote-located conference participant), the third operation comprising switching, pausing, or starting a display picture of the first content (paragraphs [0061]-[0062] describe the camera feed is being transmitted to a remotely-located user’s device for overlay on the camera feed).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Tucker for providing a camera feed or a video feed on a second display. The teachings of Tucker, when implemented in the Daniels and Cha system, will allow one of ordinary skill in the art to apply virtuality techniques to participants of a meeting. One of ordinary skill in the art would be motivated to utilize the teachings of Tucker in the Daniels and Cha system in order to provide a virtual immersion experience to meeting participants.
As for claim 14, the combined system of Daniels and Cha fails to teach wherein after displaying the first content, the method further comprises:
receiving a second operation on the first content; and
performing, in response to the second operation, a third operation on the first content, the third operation comprising moving a display picture of the first content, rotating the display picture, or deforming the display picture.
Tucker discloses
wherein after displaying the first content, the method further comprises:
receiving a second operation on the first content; and
performing, in response to the second operation, a third operation on the first content, the third operation comprising moving a display picture of the first content, rotating the display picture, or deforming the display picture (paragraphs [0061]-[0062] describe the camera feed is being transmitted to a remotely-located user’s device for overlay on the camera feed).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Tucker for providing a camera feed or a video feed on a second display. The teachings of Tucker, when implemented in the Daniels and Cha system, will allow one of ordinary skill in the art to apply virtuality techniques to participants of a meeting. One of ordinary skill in the art would be motivated to utilize the teachings of Tucker in the Daniels and Cha system in order to provide a virtual immersion experience to meeting participants.
As for claim 15, the combined system of Daniels and Cha fails to teach wherein the first content comprises multimedia content required by the conference.
Tucker discloses
wherein the first content comprises multimedia content required by the conference (paragraph [0028] describes slide presentation for a conference).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Tucker for providing a slide presentation for a conference. The teachings of Tucker, when implemented in the Daniels and Cha system, will allow one of ordinary skill in the art to apply virtuality techniques to participants of a meeting. One of ordinary skill in the art would be motivated to utilize the teachings of Tucker in the Daniels and Cha system in order to provide a virtual immersion experience to meeting participants.
Claims 17-18, 20-22, 24 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels (US 2017/0243403) in view of Wang (US 2023/0012929).
As for claim 17, Daniels teaches a presentation method implemented by a second device and comprising:
obtaining first content when the second device is within a proximity of a first location in a physical space determined by the first device (paragraph [0061] describes an onsite user in proximity to a particular physical location can carry a computer device and there are multiple onsite users; paragraphs [0077]-[0078] describe onsite devices gather positional and environmental data and locate their respective user’s position using a combination of GPS and LockAR techniques; paragraphs [0080]-[0081] and [0086] describe onsite device A1 sends AR content to the other devices. The user of onsite device A1 creates a piece of AR content, which is also displayed at other participating devices. Onsite device A2 edits the new AR content that was previously edited by onsite device A1);
capturing an image using a camera of the second device (paragraphs [0046]-[0047] describe mobile devices enable their users to experience AR at a variety of different locations, these mobile devices include a variety of on-board sensors and associated data processing systems that enable the mobile device to obtain measurements of the surrounding real-world environment. Examples of these sensors include a camera); and
displaying the first content on the image so that a display location of the first content in the image overlaps the first location (paragraph [0071] describes the AR application of the OSDD places the received AR content within an offsite, simulated, virtual background/environment. User 2 sees an offsite virtual augmented reality, which is a completely VR experience substantially resembling the augmented reality seen by User 1 onsite; paragraphs [0088]-[0091] describe an offsite user selects a piece of AR content to view, or a geographic location to view AR content from. The ovAR application renders the AR content and background environment based on the information it receives, and updates the rendering as the ovAR application continues to receive information).
Daniels fails to teach
wherein a proximity is a preset range;
wherein a content is displayed in a superimposing manner.
Wang discloses
wherein a proximity is a preset range (paragraph [0044] describes a server identifies messages that are sent by sending clients, if the messages have location associated with them, if the receiving client is in the associated location (i.e. within a given range of that location), the message content will be sent to the receiving client such that it can be displayed on the receiving device, e.g. using an AR approach);
wherein a content is displayed in a superimposing manner (paragraphs [0015] and [0063] describe the message is displayed on an AR display. An AR display is one which overlays (i.e. superimposes) display graphic on a real world environment, in a first type of AR display, display graphics are overlaid on an image taken from a camera).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Wang for overlaying a camera feed on a display. The teachings of Wang, when implemented in the Daniels system, will allow one of ordinary skill in the art to apply virtuality technique to messages that are sent to a receiver. One of ordinary skill in the art would be motivated to utilize the teachings of Wang in the Daniels system in order to offer location tagged messages that are displayable on devices present at associated locations.
As for claim 18, the combined system of Daniels and Wang teaches
receiving an operation (Wang: paragraph [0044] describes each receiving client periodically sends its location to the server which is a result from a user opening the messaging app on his or her device, or selecting a refresh option (i.e. the user’s action is an operation));
sending a location of the second device to a server in response to the operation (Wang: paragraph [0044] describes each receiving client periodically sends its location to the server);
receiving indication information of the first content from the server when the second device is within the preset range (Wang: paragraph [0044] describes the server receives message indicating a receiving client’s location, the server will identify any messages previously sent to the receiving client, by the sending clients. If the messages have a location associated with them, and if the receiving client is in a given range of the location, the message content will be sent to the receiving client; paragraphs [0051]-[0052] describe the server determines which of the identified messages should actually be notified or sent to the receiving client); and
further obtaining the first content based on the indication information (Wang: paragraph [0054] describes the receiving client is within a sending distance, the server sends the message content of the multi-position message to the client, which displays the message to the user in an AR interface).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Wang for overlaying a camera feed on a display. The teachings of Wang, when implemented in the Daniels system, will allow one of ordinary skill in the art to apply virtuality technique to messages that are sent to a receiver. One of ordinary skill in the art would be motivated to utilize the teachings of Wang in the Daniels system in order to offer location tagged messages that are displayable on devices present at associated locations.
As for claim 20, the combined system of Daniels and Wang teaches wherein after displaying the first content (Daniels: paragraph [0091] describes the ovAR application renders the AR content), the method further comprises:
obtaining information indicating a second operation for the first content and received by a first device (Daniels: paragraphs [0090]-[0092] describe the ovAR application queries the server for the information needed for display/interaction with the pieces of AR content); and
performing a third operation for the first content (Daniels: paragraph [0091] describes the server streams the information needed to display the piece of AR content back to the ovAR application in real time), and comprising switching a display picture of the first content, ending the display picture, pausing the display picture, or starting the display picture (Daniels: paragraph [0091] describes the ovAR renders the AR content and background environment based on information it receives, and updates the rendering as the ovAR application continues to receive information).
As for claim 21, the combined system of Daniels and Wang teaches wherein after displaying the first content (Daniels: paragraph [0091] describes the ovAR application renders the AR content), the method further comprises:
obtaining information indicating a second operation for the first content and received by a first device (Daniels: paragraphs [0090]-[0092] describe the ovAR application queries the server for the information needed for display/interaction with the pieces of AR content); and
performing a third operation for the first content (Daniels: paragraph [0091] describes the server streams the information needed to display the piece of AR content back to the ovAR application in real time), and comprising moving a display picture, rotating the display picture, or deforming the display picture (Wang: paragraphs [0075]-[0076] describe images, video that appear to be pinned or tagged to an object can be moved to another location in an environment).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Wang for overlaying a camera feed on a display. The teachings of Wang, when implemented in the Daniels system, will allow one of ordinary skill in the art to apply virtuality technique to messages that are sent to a receiver. One of ordinary skill in the art would be motivated to utilize the teachings of Wang in the Daniels system in order to offer location tagged messages that are displayable on devices present at associated locations.
As for claim 22, the combined system of Daniels and Wang teaches further comprising adjusting an orientation and/or a focal length of the camera so as to adjust a browsing angle and/or a clarity of the first content (Daniels: paragraph [0085] describes the users offsite devices can choose their own respective points of view (POV) in the virtual AR scene. For example, the user of an offsite device can choose a third-person POV with respect to another user’s avatar, such that part or all of the avatar is visible on the screen of the offsite device, and any movement of the avatar moves a camera a corresponding amount; Wang: Fig. 5B; paragraph [0070] describes a view captured by a camera or cameras of a smartphone. The image data is captured in real time and is dynamically adjusted as the device and camera(s) move (i.e. adjusting orientation) the illustrated picture contains depth information).
As for claim 24, Daniels teaches a second device (paragraphs [0061] and [0064] describe computer devices (130 and 140) of onsite and offsite augmented reality (AR) users, an onsite user in proximity to a particular physical location can carry the computer device 130, the onsite users use respective mobile digital devices (MDD)), comprising:
a camera configured to capture an image (paragraphs [0046]-[0047] describe mobile devices enable their users to experience AR at a variety of different locations, these mobile devices include a variety of on-board sensors and associated data processing systems that enable the mobile device to obtain measurements of the surrounding real-world environment. Examples of these sensors include a camera);
a memory (paragraph [0236] describes storage subsystem storing instructions); and
one or more processors coupled to the memory and configured to execute the instructions to cause the second device to (paragraph [0236] describes processor devices that execute instructions to perform operations):
obtain first content (paragraph [0071] describes an offsite digital device (OSDD) receives the AR content and the LockAR data specifying the location of the AR content; paragraphs [0080] and [0086] describe onsite device A1 sends AR content to the other devices);
display the first content on the image so that a display location of the first content overlaps the first location (paragraph [0071] describes the AR application of the OSDD places the received AR content within an offsite, simulated, virtual background/environment. User 2 sees an offsite virtual augmented reality, which is a completely VR experience substantially resembling the augmented reality seen by User 1 onsite; paragraphs [0088]-[0091] describe an offsite user selects a piece of AR content to view, or a geographic location to view AR content from. The ovAR application renders the AR content and background environment based on the information it receives, and updates the rendering as the ovAR application continues to receive information).
Daniels fails to teach
wherein a second device is within a preset range of a first location in a physical space;
wherein content is displayed in a superimposing manner.
Wang discloses
wherein a second device is within a preset range of a first location in a physical space (paragraph [0044] describes a server identifies messages that are sent by sending clients, if the messages have location associated with them, if the receiving client is in the associated location (i.e. within a given range of that location), the message content will be sent to the receiving client such that it can be displayed on the receiving device, e.g. using an AR approach);
wherein a content is displayed in a superimposing manner (paragraphs [0015] and [0063] describe the message is displayed on an AR display. An AR display is one which overlays (i.e. superimposes) display graphic on a real world environment, in a first type of AR display, display graphics are overlaid on an image taken from a camera).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Wang for overlaying a camera feed on a display. The teachings of Wang, when implemented in the Daniels system, will allow one of ordinary skill in the art to apply virtuality technique to messages that are sent to a receiver. One of ordinary skill in the art would be motivated to utilize the teachings of Wang in the Daniels system in order to offer location tagged messages that are displayable on devices present at associated locations.
As for claim 26, Daniels teaches a computer program product storing computer instructions which, when executed by one or more processors, cause a second device to (paragraph [0236] describes a storage subsystem storing instructions and processor devices that execute instructions to perform operations):
obtain first content when the second device is within a proximity of a first location in a physical space determined by the first device (paragraph [0061] describes an onsite user in proximity to a particular physical location can carry a computer device and there are multiple onsite users; paragraphs [0077]-[0078] describe onsite devices gather positional and environmental data and locate their respective user’s position using a combination of GPS and LockAR techniques; paragraphs [0080]-[0081] and [0086] describe onsite device A1 sends AR content to the other devices. The user of onsite device A1 creates a piece of AR content, which is also displayed at other participating devices. Onsite device A2 edits the new AR content that was previously edited by onsite device A1);
capture an image using a camera of the second device (paragraphs [0046]-[0047] describe mobile devices enable their users to experience AR at a variety of different locations, these mobile devices include a variety of on-board sensors and associated data processing systems that enable the mobile device to obtain measurements of the surrounding real-world environment. Examples of these sensors include a camera); and
display the first content on the image so that a display location of the first content in the image overlaps the first location (paragraph [0071] describes the AR application of the OSDD places the received AR content within an offsite, simulated, virtual background/environment. User 2 sees an offsite virtual augmented reality, which is a completely VR experience substantially resembling the augmented reality seen by User 1 onsite; paragraphs [0088]-[0091] describe an offsite user selects a piece of AR content to view, or a geographic location to view AR content from. The ovAR application renders the AR content and background environment based on the information it receives, and updates the rendering as the ovAR application continues to receive information).
Daniels fails to teach
wherein a proximity is a preset range;
wherein a content is displayed in a superimposing manner.
Wang discloses
wherein a proximity is a preset range (paragraph [0044] describes a server identifies messages that are sent by sending clients, if the messages have location associated with them, if the receiving client is in the associated location (i.e. within a given range of that location), the message content will be sent to the receiving client such that it can be displayed on the receiving device, .e.g. using an AR approach);
wherein a content is displayed in a superimposing manner (paragraphs [0015] and [0063] describe the message is displayed on an AR display. An AR display is one which overlays (i.e. superimposes) display graphic on a real world environment, in a first type of AR display, display graphics are overlaid on an image taken from a camera).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Wang for overlaying a camera feed on a display. The teachings of Wang, when implemented in the Daniels system, will allow one of ordinary skill in the art to apply virtuality technique to messages that are sent to a receiver. One of ordinary skill in the art would be motivated to utilize the teachings of Wang in the Daniels system in order to offer location tagged messages that are displayable on devices present at associated locations.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Daniels (US 2017/0243403) in view of Wang (US 2023/0012929) further in view of Zheng et al. (US 2022/0124685), hereinafter Zheng.
As for claim 19, the combined system of Daniels and Wang teaches
receiving an operation (Wang: paragraph [0044] describes each receiving client periodically sends its location to the server which is a result from a user opening the messaging app on his or her device, or selecting a refresh option (i.e. the user’s action is an operation));
sending a location of the second device to an entity in response to the operation (Wang: paragraph [0044] describes each receiving client periodically sends its location to the server);
receiving indication information of the first content from the entity when the second device is within the preset range (Wang: paragraph [0044] describes the server receives message indicating a receiving client’s location, the server will identify any messages previously sent to the receiving client, by the sending clients. If the messages have a location associated with them, and if the receiving client is in a given range of the location, the message content will be sent to the receiving client; paragraphs [0051]-[0052] describe the server determines which of the identified messages should actually be notified or sent to the receiving client); and
further obtaining the first content based on the indication information (Wang: paragraph [0054] describes the receiving client is within a sending distance, the server sends the message content of the multi-position message to the client, which displays the message to the user in an AR interface).
The combined system of Daniels and Wang fails to teach wherein an entity is a first device.
Zheng discloses
wherein an entity is a first device (paragraphs [0015] and [0033]-[0034] describe a computing device’s wireless interface receives location of a head wearable display (HWD) and transfers image data describing an image to be rendered).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Zheng for implementing electronic devices to obtain location information. The teachings of Zheng, when implemented in the Daniels and Wang system, will allow one of ordinary skill in the art to determine a view of an artificial reality corresponding to the location of a device. One of ordinary skill in the art would be motivated to utilize the teachings of Zheng in the Daniels and Wang system in order to transmit to a device augmented reality content using the location of the device.
Allowable Subject Matter
Claims 6 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 6, the claim recites the limitations “The system of claim 5, further comprising a server, wherein the first device is further configured to display an option corresponding to multimedia content that is classified according to labels of "latest," "local," and "cloud," wherein "latest" indicates the multimedia content has been recently browsed on the first device, wherein "local" indicates the multimedia content is locally stored in the first device; and wherein "cloud" indicates the multimedia content is stored in the server.”
Kocharlakota et al. (US 2020/0005542) disclose a personalized communication augmented reality (PCAR) framework. A server maintains a PCAR database and updates the PCAR database to include an association between a selected augmentation target and an item of AR content (see paragraphs [0116]-[0117]). Kocharlakota discloses a server maintaining a database that includes parameters that define the association between a selected augmentation target and an item of AR content. Kocharlakota, however, fails to teach the claimed limitations.
Regarding claim 16, the claim recites the limitations “The method of claim 15, wherein the method further comprises:
displaying, by the first device, an option corresponding to one or more pieces of multimedia content, wherein the one or more pieces of multimedia content are classified according to labels of "latest," "local," and "cloud," wherein "latest" indicates the multimedia content has been recently browsed on the first device, wherein "local" indicates the multimedia content is locally stored in the first device; and wherein "cloud" indicates the multimedia content is stored in the server.”
Claim 16 is objected for the same reasons given to claim 6.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Thiel (US 2023/0199148) teaches virtual environment streaming to a video communications platform.
Kuhn et al. (US 11,087,559) teach managing augmented reality content associated with a physical location.
Antypov (US 2020/0051335) teaches augmented reality user interface including dual representation of physical location.
Ikeda et al. (US 2022/0382390) teach technology to generate first location data with respect to a real space and second location data regarding location of another device.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to L. T N. whose telephone number is (571)272-1013. The examiner can normally be reached M & Th 5:30 am - 2:30 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TONIA DOLLINGER can be reached at 571-272-4170. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/L. T. N/
Examiner, Art Unit 2459
/TONIA L DOLLINGER/Supervisory Patent Examiner, Art Unit 2459