DETAILED ACTION
Notice relating to Pre-AIA or AIA Status
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 22 DECEMBER 2025 has been entered.
Status of the Claims
Applicant’s current amendment (dated 22 DECEMBER 2025) has been entered. The status of the claims is as follows: Claims 1-20 are currently pending in the application.
Response to Arguments
Applicant’s arguments with respect to the claims have been considered but are moot because the arguments do not apply to the new reference(s) and/or citations being used in the current rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4, 10-13, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cotter, US 2022/0193542 in view of Ye, US 2021/0370186 and further in view of Kawakami et al., US 2021/0321061 and Joyce et al., US 2019/0385371.
Regarding claim 1, Cotter discloses a method of implementing multi-streamer live streaming and multi-streamer interactions by a streamer client device, comprising:
receiving, by a device, virtual space information from a server computing device (with virtual environment created by a video game; page 9, paragraph 69, and wherein with data transferred from at least a server; pages 3-4, paragraphs 24 and 29, and Fig. 3), wherein the streamer device is associated with a first streamer (with multiple, i.e. including a first, client device; Figs. 10 and 11), and the virtual space information comprises information indicative of a virtual space (again with virtual environment created by a video game; page 9, paragraph 69);
determining, by the device, first virtual character attribute information indicating attributes of a first virtual character corresponding to the first streamer (with at least the first streamer/user playing the game, i.e. player/user A or a first virtual character, and again in virtual environment with characters, i.e. including a first character; page 9, paragraph 69, and with information, i.e. attributes, about the particular character, i.e. in this instance, player 1 or first virtual character; page 9, paragraph 73, and with at least streaming; page 1, paragraph 17, and pages 3-4, paragraph 29, and live content; page 4, paragraph 32, and page 7, paragraph 56, and with multiple, i.e. including a first, client device; Figs. 10 and 11);
determining, by the device, first live streaming view information based on the first virtual character attribute information (the information is used to update the viewing at the particular client device for the particular character, i.e. first virtual character; page 9, paragraph 73, and wherein with a particular view/perspective of the character; page 9, paragraphs 69-70, and again with at least streaming; page 1, paragraph 17, and pages 3-4, paragraph 29, and live content; page 4, paragraph 32, and page 7, paragraph 56);
detecting, by the device, whether the virtual space information comprises information indicative of a second streamer (can determine whether there are multiple players in the game, in order to be able to combine the various feeds/information, i.e. in this instance, determining that four players are playing; page 9, paragraph 69);
acquiring, by the device, second virtual character attribute information indicating attributes of a second virtual character corresponding to the second streamer in response to detecting that the virtual space information comprises the information indicative of the second streamer (with at least a second streamer/user playing the game, i.e. player/user B or a second virtual character, and again in virtual environment with characters, i.e. virtual characters; page 9, paragraph 69, and with information, i.e. attributes, about the particular character, i.e. in this instance, player B or second character; page 9, paragraph 73, and with multiple, i.e. including a second, client device; Figs. 10 and 11), and wherein the second streamer client device corresponds to live streaming view information that is different from the first live streaming view information (different view/perspective of the different player/streamer, i.e. such as the second virtual character; page 9, paragraphs 69-70, and again with at least streaming; page 1, paragraph 17, and pages 3-4, paragraph 29, and live content; page 4, paragraph 32, and page 7, paragraph 56);
generating, by the device, the virtual space based on the virtual space information (with virtual environment created by a video game; page 9, paragraph 69, and generate and render a multi-view interface of the virtual environment/game; page 9, paragraph 72, and Fig. 9, and page 9, paragraphs 69-70);
mapping, by the device, the first virtual character and the second virtual character into the generated virtual space based on the first live streaming view information, the first virtual character attribute information, and the second virtual character attribute information (mapping/combining the characters into the virtual environment/game; page 9, paragraph 72, and Fig. 9, and based on the various view information of the characters/players, as well as the information about characters/players; page 9, paragraphs 69-70 and 73); and
generating and displaying, by the device, images each of which comprises the first virtual character and the second virtual character in the generated virtual space (can generate and render multi-view interface image(s) on at least a client device which show at least the first and second characters, i.e. including the first streamer/user and second streamer/user; page 9, paragraph 72, and Fig. 9, and again based on the various view information of the characters/players, as well as the information about characters/players; page 9, paragraphs 69-70 and 73).
While Cotter does allude to operations performed locally (local display and execution of game instance(s); page 10, paragraphs 81-82), and does disclose the multi-streamer live streaming (with multiple users/players playing a game in a virtual environment; page 9, paragraph 69, and with at least streaming; page 1, paragraph 17, and pages 3-4, paragraph 29, and live content; page 4, paragraph 32, and page 7, paragraph 56), and the generated virtual space (virtual environment created by a video game; page 9, paragraph 69), Cotter does not explicitly disclose operations performed by a streamer client device;
a target character, a target streamer, and target view information;
a reference streamer, and a reference character;
detecting, by a device, whether a distance between the target character and the reference character in a space satisfies a predetermined threshold; and
rendering, by the device, interaction behaviors of the target character and the reference character in the space in response to detecting that the distance between the target character and the reference character satisfies the predetermined threshold.
In a related art, Ye does disclose a target character, a target streamer, and target view information (can be related to a followed, i.e. target, player/character of a streamer, which will include their character and view information; page 3, paragraph 37, and pages 6-7, paragraph 56, and page 8, paragraph 61, and wherein based on streaming of the user playing; page 3, paragraph 37, and page 8, paragraph 62), a reference streamer, and a reference character (can be related to a different, i.e. reference, player/character of a streamer, which will include their character and view information; pages 6-7, paragraph 56, and again based on streaming of the user playing; page 3, paragraph 37, and page 8, paragraph 62), and various attribute information (with game state/play data, which represents information about the characters/players and is used for view information; page 8, paragraph 64).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Cotter and Ye by allowing certain characters/players to be certain types, such as target/reference, in order to provide an improved system and method for recommending streams of game play of a video game to users that are interested in the game play of the video game (Ye; page 1, paragraph 6).
While Cotter in view of Ye does again allude to local processing (Cotter; local display and execution of game instance(s); page 10, paragraphs 81-82, and Ye; local execution; page 5, paragraph 49), Cotter in view of Ye does not explicitly disclose operations performed by a streamer client device;
detecting, by a device, whether a distance between a first character and a second character in a space satisfies a predetermined threshold; and
rendering, by the device, interaction behaviors of the first character and the second character in the space in response to detecting that the distance between the first character and the second character satisfies the predetermined threshold.
In a related art, Kawakami does disclose operations performed by a streamer client device (local processing by at least one of the terminals; page 12, paragraph 186, and can process, generate, and display at least first distributor/client and second viewer/participant/client together in the same virtual space; page 12, paragraphs 184-185, and Fig. 6B, elements 10 and 30, and page 5, paragraphs 88-89, and wherein with live content; page 1, paragraphs 10-12, and with streaming, i.e. streamer client; page 5, paragraph 84, and page 9, paragraph 144).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Cotter, Ye, and Kawakami by allowing client side operations to be performed for shared virtual spaces, in order to provide an improved system and method for distributed video in which a plurality of distributors share the same location via a network even when the plurality of distributors are separated from each other in real space (Kawakami; page 1, paragraph 7).
Cotter in view of Ye and Kawakami does not explicitly disclose detecting, by a device, whether a distance between a first character and a second character in a space satisfies a predetermined threshold; and
rendering, by the device, interaction behaviors of the first character and the second character in the space in response to detecting that the distance between the first character and the second character satisfies the predetermined threshold.
In a related art, Joyce does disclose detecting, by a device, whether a distance between a first character and a second character in a space satisfies a predetermined threshold, and rendering, by the device, interaction behaviors of the first character and the second character in the space in response to detecting that the distance between the first character and the second character satisfies the predetermined threshold (when detected positioning of first and second users/characters are within a threshold distance, i.e. satisfying a predetermined threshold, system can then perform/render certain animation/interaction behaviors to the users/characters, such as the users/characters turning towards and/or facing one another to interact; page 2, paragraphs 16-18).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Cotter, Ye, Kawakami, and Joyce by allowing particular interactions to be performed in an environment, such as the one described in Cotter in view of Ye and Kawakami, based on determined distance of objects/characters, in order to provide an improved system and method which allows a user to have an interactive experience enhanced by germane, pertinent, contextual interaction with virtual object(s) and/or character(s) placed in the view of the scene (Joyce; page 1, paragraph 2).
Regarding claim 2, Cotter in view of Ye, Kawakami, and Joyce discloses determining a display type of the target virtual character corresponding to the target streamer based on the target virtual character attribute information (Cotter; based on the information, can determine the view type of character to be a first-person shooter view from the perspective of the character; page 9, paragraph 74, and Ye; can determine display type of view, i.e. overhead, side, etc.; page 13, paragraph 89, and again with at least a followed, i.e. target, player/character; page 3, paragraph 37, and pages 6-7, paragraph 56, and page 8, paragraph 61); and
determining the target live streaming view based on preset view configuration information and the display type of the target virtual character (Cotter; generating for display based on type of view and a preset configuration information, i.e. layout/location/sizing for the streams; page 9, paragraph 75, and based on configurations of the particular client; page 9, paragraph 78, and again, based on the particular view type of character to be a first-person shooter view from the perspective of the character; page 9, paragraph 74, and Ye; display type of view, i.e. overhead, side, etc.; page 13, paragraph 89).
Regarding claim 4, Cotter in view of Ye, Kawakami, and Joyce discloses the target virtual character attribute information comprises target location information and wherein the reference virtual character attribute information comprises reference location information (Cotter; with information, i.e. attributes, about the particular characters; page 9, paragraph 73, and Ye; information about the characters/player, i.e. target/reference, can include at least location/positioning information used for showing the players/characters; pages 7-8, paragraph 60, and page 9, paragraph 72), and wherein the method further comprises:
rendering the target virtual character in the virtual space based on the target location information and rendering the reference virtual character in the virtual space based on the reference location information (Ye; can present the characters/players at their particular locations, such that a first/target character is shown at a particular position/location and the second/reference character is shown at their particular position/location, i.e. car associated with a second player is coming from behind and is trying to overtake player 1’s car; pages 7-8, paragraph 60, and Cotter; Fig. 7, and page 6, paragraph 50, and Kawakami; local processing by at least one of the terminals; page 12, paragraph 186 for processing, generating, and displaying at least first distributor/client and second viewer/participant/client together in the same virtual space; page 12, paragraphs 184-185, and Fig. 6B, elements 10 and 30, and page 5, paragraphs 88-89).
Regarding claim 10, Cotter in view of Ye, Kawakami, and Joyce discloses receiving virtual character attribute information indicating attributes of an audience virtual character corresponding to a target audience (Ye; can receive information about spectator(s), i.e. audience information; page 7, paragraph 57, and data indicating location information about non-playing characters, i.e. audience/spectators; page 9, paragraph 72);
determining the audience virtual character and audience location information of the audience virtual character in the virtual space based on the virtual character attribute information (Ye; information can have relative position associated with the spectator; pages 7-8, paragraph 60, as well as location information about non-playing characters, i.e. audience/spectators; page 9, paragraph 72); and
generating and displaying at least one interactive live streaming image on the streamer client device by performing rendering based on the target live streaming view information, the audience virtual character, the audience location information, and the multi-streamer live streaming image (Ye; generated view(s); page 8, paragraph 64, and based on the various information, can generate and present at particular locations; pages 7-8, paragraph 60, and again based on relative position associated with the spectator; pages 7-8, paragraph 60, as well as location information about non-playing characters, i.e. audience/spectators; page 9, paragraph 72, and wherein interactive in that chats and other interactions can be provided; page 11, paragraph 80, and page 12, paragraph 86, and Cotter; Fig. 7, and page 6, paragraph 50, and Fig. 9, and page 9, paragraph 70, and Kawakami; local processing by at least one of the terminals; page 12, paragraph 186 for processing, generating, and displaying at least first distributor/client and second viewer/participant/client together in the same virtual space; page 12, paragraphs 184-185, and Fig. 6B, elements 10 and 30, and page 5, paragraphs 88-89).
Regarding claim 11, Cotter in view of Ye, Kawakami, and Joyce discloses determining parameter change information of the multi-streamer live streaming image in response to receiving an instruction of changing a parameter from the target streamer (Cotter; based on inputs from a particular player that effect the game; page 9, paragraph 73, and Ye; based on inputs that effect the game play/state; page 8, paragraph 64); and
updating the multi-streamer live streaming images based on the parameter change information (Cotter; video outputs, i.e. including images, updated based on the inputs; page 9, paragraph 73, and Ye; view data updated and forwarded/displayed based on the inputs; page 8, paragraph 64).
Claim 12, which discloses a device, is analyzed with respect to the citations and/or rationale provided in the rejection of similar claim 1. The following additional limitations are also disclosed:
a device comprising a memory and a processor (Cotter; with at least device(s) containing at least a processor and memory; page 10, paragraphs 85-87), wherein the memory stores computer-readable instructions that upon execution by the processor cause the processor to perform operations (Cotter; with processor executable instructions stored in the memory; page 10, paragraphs 86-88).
Claim 13, which discloses a device, is analyzed with respect to the citations and/or rationale provided in the rejection of similar claim 2.
Claim 19, which discloses a device, is analyzed with respect to the citations and/or rationale provided in the rejection of similar claim 10.
Claim 20, which discloses a non-transitory computer-readable medium, is analyzed with respect to the citations and/or rationale provided in the rejection of similar claims 1 and 12.
Claims 3, 5, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Cotter, US 2022/0193542 in view of Ye, US 2021/0370186, Kawakami et al., US 2021/0321061, and Joyce et al., US 2019/0385371, and further in view of Aoyama et al., US 2012/0306855.
Regarding claim 3, Cotter in view of Ye, Kawakami, and Joyce discloses all the claimed limitations of claim 1, as well as live streaming view information from preset view configuration information in response to determining a display type of the target virtual character (Cotter; generating for display based on type of view and a preset configuration information, i.e. layout/location/sizing for the streams; page 9, paragraph 75, and based on configurations of the particular client; page 9, paragraph 78, and again, based on the particular view type of character to be a first-person shooter view from the perspective of the character; page 9, paragraph 74, and Ye; can determine display type of view, i.e. overhead, side, etc.; page 13, paragraph 89); and
live streaming view information from preset view configuration information in response to determining the display type of the target virtual character (Cotter; generating for display based on type of view and a preset configuration information, i.e. layout/location/sizing for the streams; page 9, paragraph 75, and based on configurations of the particular client; page 9, paragraph 78, and again, based on the particular view type of character to be a first-person shooter view from the perspective of the character; page 9, paragraph 74, and Ye; display type of view, i.e. overhead, side, etc.; page 13, paragraph 89).
Cotter in view of Ye, Kawakami, and Joyce does not explicitly disclose calling planar viewing in response to determining that a display type is a planar display type, and calling stereoscopic viewing in response to determining that the display type is a stereoscopic display type.
In a related art, Aoyama does disclose calling planar viewing in response to determining that a display type is a planar display type (based on a particular condition, i.e. display type, system can call/switch to a planar display/view; page 1, paragraphs 7-10), and calling stereoscopic viewing in response to determining that the display type is a stereoscopic display type (based on a particular condition, i.e. display type, system can call/switch to a stereoscopic display/view; page 1, paragraphs 7-10).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Cotter, Ye, Kawakami, Joyce, and Aoyama by allowing different display types to be determined and changed for the content already being provided in Cotter in view of Ye, Kawakami, and Joyce, in order to provide an improved system and method for display control which reduces a sense of discomfort of the appearance of an object, which is viewed in a planar manner, when displaying the object in a stereoscopically displayed virtual space (Aoyama; page 1, paragraph 5).
Regarding claim 5, Cotter in view of Ye, Kawakami, and Joyce discloses all the claimed limitations of claim 4, as well as determining a display type of the reference virtual character corresponding to the reference streamer based on the reference virtual character attribute information (Cotter; based on the information, can determine the view type of character to be a first-person shooter view from the perspective of the character; page 9, paragraph 74, and Ye; can determine display type of view, i.e. overhead, side, etc.; page 13, paragraph 89, and again with at least a different, i.e. second/reference, player/character; pages 6-7, paragraph 56);
rendering a streamer picture of the reference virtual character in the virtual space based on the reference location information in response to determining the display type of the reference virtual character (Ye; information about the characters/player, i.e. target/reference, can include at least location/positioning information used for rendering and showing the players/characters; pages 7-8, paragraph 60, and page 9, paragraph 72, and wherein showing based on the type of view, i.e. overhead, side, etc.; page 13, paragraph 89, and again with at least a different, i.e. second/reference, player/character; pages 6-7, paragraph 56, and Cotter; displaying based on view type of character to be a first-person shooter view from the perspective of the character; page 9, paragraph 74, and Kawakami; local processing by at least one of the terminals; page 12, paragraph 186 for processing, generating, and displaying at least first distributor/client and second viewer/participant/client together in the same virtual space; page 12, paragraphs 184-185, and Fig. 6B, elements 10 and 30, and page 5, paragraphs 88-89); and
rendering a streamer picture of the reference virtual character in the virtual space based on the reference location information in response to determining the display type of the reference virtual character (Ye; information about the characters/player, i.e. target/reference, can include at least location/positioning information used for rendering and showing the players/characters; pages 7-8, paragraph 60, and page 9, paragraph 72, and wherein showing based on the type of view, i.e. overhead, side, etc.; page 13, paragraph 89, and again with at least a different, i.e. second/reference, player/character; pages 6-7, paragraph 56, and Cotter; displaying based on view type of character to be a first-person shooter view from the perspective of the character; page 9, paragraph 74, and Kawakami; local processing by at least one of the terminals; page 12, paragraph 186 for processing, generating, and displaying at least first distributor/client and second viewer/participant/client together in the same virtual space; page 12, paragraphs 184-185, and Fig. 6B, elements 10 and 30, and page 5, paragraphs 88-89).
Cotter in view of Ye, Kawakami, and Joyce does not explicitly disclose rendering a planar picture in response to determining that the display type is the planar display type, and rendering a stereoscopic picture in response to determining that the display type is the stereoscopic display type.
In a related art, Aoyama does disclose rendering a planar picture in response to determining that the display type is the planar display type, and rendering a stereoscopic picture in response to determining that the display type is the stereoscopic display type (based on a particular condition, i.e. display type, system can switch to rendering a planar or stereoscopic display/view; page 1, paragraphs 7-10).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Cotter, Ye, Kawakami, Joyce, and Aoyama by allowing different display types to be determined and changed for the content already being provided in Cotter in view of Ye, Kawakami, and Joyce, in order to provide an improved system and method for display control which reduces a sense of discomfort of the appearance of an object, which is viewed in a planar manner, when displaying the object in a stereoscopically displayed virtual space (Aoyama; page 1, paragraph 5).
Claim 14, which discloses a device, is analyzed with respect to the citations and/or rationale provided in the rejection of similar claims 4 and 5.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Cotter, US 2022/0193542 in view of Ye, US 2021/0370186, Kawakami et al., US 2021/0321061, and Joyce et al., US 2019/0385371, and further in view of Zhang et al., US 2015/0222239.
Regarding claim 6, Cotter in view of Ye, Kawakami, and Joyce discloses all the claimed limitations of claim 1, as well as the target virtual character of the target streamer and the reference virtual character of the reference streamer (Cotter; with information about the particular characters; page 9, paragraph 73, and Ye; information about the characters/player, i.e. target/reference, can include at least location/positioning information used for showing the players/characters; pages 7-8, paragraph 60, and page 9, paragraph 72, and with at least a followed, i.e. target, player/character of a streamer; page 3, paragraph 37, and pages 6-7, paragraph 56, and page 8, paragraph 61, and a different, i.e. reference, player/character of a streamer; pages 6-7, paragraph 56); and
generating and displaying the multi-streamer live streaming image based on picture(s) (Cotter; generating for display based on views, i.e. pictures, and a preset configuration information, i.e. layout/location/sizing for the streams; page 9, paragraph 75, and based on configurations of the particular client; page 9, paragraph 78, and Fig. 7, and page 6, paragraph 50, and Kawakami; local processing by at least one of the terminals; page 12, paragraph 186 for processing, generating, and displaying at least first distributor/client and second viewer/participant/client together in the same virtual space; page 12, paragraphs 184-185, and Fig. 6B, elements 10 and 30, and page 5, paragraphs 88-89).
Cotter in view of Ye, Kawakami, and Joyce does not explicitly disclose
determining a target interaction rule associated with a streamer based on character attribute information;
determining a reference interaction rule associated with a streamer based on character attribute information;
rendering a target interaction picture in which the character interacts according to the target interaction rule, and rendering a reference interaction picture in which the character interacts according to the reference interaction rule; and
based on the target interaction and the reference interaction.
In a related art, Zhang does disclose determining a target interaction rule associated with a streamer based on character attribute information (user begins attacking adversary, i.e. interaction rule, once condition is met; pages 3-4, paragraphs 35-36);
determining a reference interaction rule associated with a streamer based on character attribute information (adversary metric increases causing the adversary to attack, i.e. interaction rule, the user; page 3, paragraph 33);
rendering a target interaction picture in which the character interacts according to the target interaction rule (user begins attacking adversary; pages 3-4, paragraphs 35-36, and wherein with at least video, i.e. picture(s); page 2, paragraph 21, and page 8, paragraphs 86 and 103), and
rendering a reference interaction picture in which the character interacts according to the reference interaction rule (adversary attacking the other character; page 3, paragraph 33, and wherein with at least video, i.e. picture(s); page 2, paragraph 21, and page 8, paragraphs 86 and 103); and
based on the target interaction and the reference interaction (confrontation event between the characters, i.e. target and reference, such as engaging in battle; page 2, paragraph 25, and page 3, paragraph 33, and pages 3-4, paragraphs 35-36).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Cotter, Ye, Kawakami, Joyce, and Zhang by allowing certain rules to be utilized when performing interactions between characters/players in the game already being provided in Cotter in view of Ye, Kawakami, and Joyce, in order to provide an improved system and method for making adjustments based on current in-game scenarios, current user action modes, and types of events that are occurring in a game (Zhang; page 1, paragraph 5).
Claim 15, which discloses a device, is analyzed with respect to the citations and/or rationale provided in the rejection of similar claim 6.
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Cotter, US 2022/0193542 in view of Ye, US 2021/0370186, Kawakami et al., US 2021/0321061, Joyce et al., US 2019/0385371, and Zhang et al., US 2015/0222239, and further in view of Mallinson, US 2018/0095542.
Regarding claim 7, Cotter in view of Ye, Kawakami, Joyce, and Zhang discloses all the claimed limitations of claim 6, as well as determining a target location of the target virtual character corresponding to the target streamer based on target location information in the target virtual character attribute information, and determining a reference location of the reference virtual character corresponding to the reference streamer based on reference location information in the reference virtual character attribute information (Ye; information about the characters/player, i.e. target/reference, can include at least location/positioning information used for rendering and showing the players/characters; pages 7-8, paragraph 60, and page 9, paragraph 72, and again including also at least a different, i.e. second/reference, player/character; pages 6-7, paragraph 56, and Joyce; can determine positioning of first and second users/characters; page 2, paragraphs 16-18, and Zhang; can determine distance between characters, i.e. location(s); page 3, paragraph 33);
setting display of the reference virtual character in a target display state in response to determining that a distance between the reference location and the target location is less than a preset distance threshold (Zhang; when characters are within predetermined distance, can determine potential confrontation of particular character(s), i.e. display setting for interaction event; page 3, paragraph 33), wherein the target display state comprises a particular display state (Zhang; display state for confrontation event; pages 3-4, paragraphs 34-36); and
generating the reference interaction picture of the reference virtual character based on the target display state (Zhang; based on the confrontation event, interaction of adversary attacking the other character occurs; page 3, paragraph 33, and wherein with at least video, i.e. picture(s); page 2, paragraph 21, and page 8, paragraphs 86 and 103).
Cotter in view of Ye, Kawakami, Joyce, and Zhang does not explicitly disclose a display state comprises a transparent display state and a gray display state.
In a related art, Mallinson does disclose a display state comprises a transparent display state and a gray display state (can transition display between transparent and opaque, i.e. considered gray as it will be covering/obscuring, and wherein related to display state; page 11, paragraph 88, and wherein can transition based on determined distance(s); page 10, paragraphs 81-82).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Cotter, Ye, Kawakami, Joyce, Zhang, and Mallinson by allowing particular display states to occur when certain distance requirements were met, in order to provide an improved system and method for introducing objects in a virtual space of an interactive application currently rendering on a display screen and allowing a user to interact with the objects, such that as the objects are moved toward the user, a view of the display screen is transitioned (Mallinson; page 3, paragraphs 33 and 35).
Claim 16, which discloses a device, is analyzed with respect to the citations and/or rationale provided in the rejection of similar claim 7.
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Cotter, US 2022/0193542 in view of Ye, US 2021/0370186, Kawakami et al., US 2021/0321061, Joyce et al., US 2019/0385371, and Zhang et al., US 2015/0222239, and further in view of Fargo, US 2019/0344185.
Regarding claim 8, Cotter in view of Ye, Kawakami, Joyce, and Zhang discloses all the claimed limitations of claim 6, as well as in response to determining that the distance between the target virtual character of the target streamer and the reference virtual character of the reference streamer is less than a preset distance threshold (Zhang; when characters are within predetermined distance; page 3, paragraph 33, and Joyce; determining positioning of first and second users/characters are within a threshold distance; page 2, paragraphs 16-18); and
a reference virtual character in the virtual space (Cotter; with at least a second streamer/user playing the game, i.e. player/user B or a second virtual character, and again in virtual environment with characters, i.e. virtual characters; page 9, paragraph 69, and Ye; can be related to a different, i.e. reference, player/character of a streamer; pages 6-7, paragraph 56).
Cotter in view of Ye, Kawakami, Joyce, and Zhang does not explicitly disclose in response to determining a distance, acquiring reference sound source information corresponding to a reference streamer, wherein the reference sound source information comprises sound information to be played for a character; and
playing the reference sound source information while displaying the character.
In a related art, Fargo does disclose in response to determining a distance, acquiring reference sound source information corresponding to a reference streamer, wherein the reference sound source information comprises sound information to be played for a character (based on a distance between objects/characters, system can acquire and present certain audio associated with particular players/sources; page 1, paragraphs 7-8, and pages 3-4, paragraphs 38-42, and wherein for presentation with the character; page 3, paragraph 38, and Fig. 1, elements 135, and Fig. 3F, elements 135, and page 7, paragraph 67); and
playing the reference sound source information while displaying the character (for presentation with display of the character; page 3, paragraph 38, and Fig. 1, elements 135, and Fig. 3F, elements 135, and page 7, paragraph 67).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Cotter, Ye, Kawakami, Joyce, Zhang, and Fargo by allowing sounds, in addition to those already provided in Cotter in view of Ye, Kawakami, Joyce, and Zhang, to be included in the content, based on particular distance requirements, in order to provide an improved system and method for utilizing voice chat communication between multiple participants to control aspects of a virtual environment (Fargo; page 1, paragraph 7).
Claim 17, which discloses a device, is analyzed with respect to the citations and/or rationale provided in the rejection of similar claim 8.
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Cotter, US 2022/0193542 in view of Ye, US 2021/0370186, Kawakami et al., US 2021/0321061, and Joyce et al., US 2019/0385371, and further in view of Gomez Diaz et al., US 10,762,878.
Regarding claim 9, Cotter in view of Ye, Kawakami, and Joyce discloses all the claimed limitations of claim 1, as well as the target live streaming view information is associated with a view of live streaming display on the streamer client device (Cotter; based on view from the perspective of the character; page 9, paragraph 74, and with at least streaming; page 1, paragraph 17, and pages 3-4, paragraph 29, and live content; page 4, paragraph 32, and page 7, paragraph 56, and Ye; view such as overhead, side, etc.; page 13, paragraph 89, and again with at least a followed, i.e. target, player/user; page 3, paragraph 37, and pages 6-7, paragraph 56, and page 8, paragraph 61, and Kawakami; local processing by at least one of the terminals; page 12, paragraph 186, and can process, generate, and display at least first distributor/client and second viewer/participant/client together in the same virtual space; page 12, paragraphs 184-185, and Fig. 6B, elements 10 and 30, and page 5, paragraphs 88-89), and wherein the live streaming view information comprises information indicative of a view orientation (Cotter; perspective, i.e. orientation, of the view; page 9, paragraphs 69 and 74-75, and Ye; perspective, i.e. orientation, of the player; page 7, paragraph 56, and can include information on direction, angle, i.e. orientation(s); page 11, paragraph 80).
Cotter in view of Ye, Kawakami, and Joyce does not explicitly disclose information indicative of a view orientation, information indicative of a view focal length, and information indicative of a view adjustment.
In a related art, Gomez Diaz does disclose information indicative of a view orientation, information indicative of a view focal length, and information indicative of a view adjustment (information used to modify image, wherein the information relates to at least focal point of user and object, i.e. length, orientation, and viewpoint related to an angular displacement/distance, i.e. adjustment; col. 4, lines 5-24, and Fig. 3B, elements 305 and 306, and col. 5, lines 39-55).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the prior art of Cotter, Ye, Kawakami, Joyce, and Gomez Diaz by allowing view adjustment information to be utilized with the already provided perspectives/views of Cotter in view of Ye, Kawakami, and Joyce, in order to provide an improved system and method for creating an immersive experience that tracks orientation of a display based on an intended focal point and modifies one or more properties of video (Gomez Diaz; col. 1, lines 51-67).
Claim 18, which discloses a device, is analyzed with respect to the citations and/or rationale provided in the rejection of similar claim 9.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RANDY A FLYNN whose telephone number is (571)270-5680. The examiner can normally be reached Monday - Thursday, 6:00am - 3:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BENJAMIN BRUCKART can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RANDY A FLYNN/Primary Examiner, Art Unit 2424