DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see pp. 9-10, filed November 20, 2025, with respect to the objections to the specification and the interpretation under 35 U.S.C. 112(f) have been fully considered and are persuasive. The objections to the specification and the interpretation under 35 U.S.C. 112(f) of Claim 13 have been withdrawn.
Applicant’s arguments with respect to claim(s) 1-15 have been considered but are moot because new grounds of rejection are made in view of Lyren (US 20200074746A1) and Wang (US 20230205915A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 6-9, and 12-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chanda (US 20200272324A1) in view of Lyren (US 20200074746A1).
As per Claim 1, Chanda teaches a work support method for supporting work performed by a plurality of users including a target user on at least one object in a virtual space where the at least one object is placed (in a collaboration system, where users wish to share viewports, the system implementing the collaborative workspace, [0040], digital objects are arranged in the workspace, digital assets can be linked to events, where the events have locations in the virtual workspace and involve interactions with the graphical objects, [0047], interactions with the virtual workspace are handled as events, people can interact with the workspace, events have data that can define a graphical object to be displayed on a physical display, and a user interaction, such as creation, modification, movement within the workspace and deletion of a graphical object, [0051]), the work support method comprising: obtaining first information including at least one of sound information based on speech by at least one user among the plurality of users, input information based on input from the at least one user among the plurality of users, or schedule information based on a plan about the work; obtaining second information indicating manipulation of the at least one object by the target user; determining whether the manipulation by the target user is to be applied to one or more other users among the plurality of users based on the first information; generating images each viewed by a corresponding one of the one or more other users based on a result of the determining and the second information; and outputting the images that are generated to terminals of the one or more other users (workflow procedures invoked by gestures on graphical objects in the virtual workspace can include grouping graphical objects (via a corner gesture that forms a box surrounding the graphical objects), where a specific workflow procedure can then be applied to all of the graphical objects in the group based on a subsequent gesture, displaying a pop-up window to the user to allow the user to identify a target user with whom the user wants to share the graphical object, sharing a graphical object with another user (force the target user to see a graphical object by changing the target user’s viewport to include the graphical object), [0071]).
However, Chanda does not teach the plurality of users being in the virtual space, wherein in the generating of the images, when a viewpoint of the target user viewing the at least one object and a viewpoint of the corresponding one of the one or more other users viewing the at least one object are different, an image in which the manipulation of the at least one object by the target user is reflected is generated, the at least one object being viewed by the corresponding one of the one or more other users from a position in the virtual space. However, Lyren teaches an example in which users play a war game. Each player carries a pretend gun that fires virtual ammunition and wears a pair of electronic glasses. A first player uses his pretend gun to shoot virtual shots at a second player. These virtual shots miss the second player but hit a wall and thus leave virtual bullet holes in the wall. When the first player, second player, or other players look at the wall, they see the wall with the bullet holes. The wall thus appears to all of the players to have been shot with bullets. During the game, the second player returns fire at the first player and a virtual bullet from the second player grazes an arm of the first player. Visual effects of the game attempt to mimic reality. In reality, this bullet would have caused a wound in the arm of the first user, so this wound is imitated in the virtual sense. The first user appears to have a wound on his arm, though the wound is virtual. When the first player or other players view the arm of the first player, they see the wound caused by the virtual bullet [0165]. Thus, the plurality of users is in the virtual space [0165]. Lyren also teaches that users view the virtual object from different viewpoints.
Users also interact with the virtual object, such as adding components to the virtual object, deleting components from the virtual object, scaling portions of the virtual object, changing colors, size, and/or shape of the virtual object, etc. While viewing and/or interacting with the virtual object, users can see each other [0251]. Thus, one user manipulates the object, the object being viewed by the corresponding one of the other users from a position in the virtual space from the viewpoint of that other user. Thus, in the generating of the images, when a viewpoint of the target user viewing the at least one object and a viewpoint of the corresponding one of the one or more other users viewing the at least one object are different, an image in which the manipulation of the at least one object by the target user is reflected is generated, the at least one object being viewed by the corresponding one of the one or more other users from a position in the virtual space [0251].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chanda to include the plurality of users being in the virtual space, wherein in the generating of the images, when a viewpoint of the target user viewing the at least one object and a viewpoint of the corresponding one of the one or more other users viewing the at least one object are different, an image in which the manipulation of the at least one object by the target user is reflected is generated, the at least one object being viewed by the corresponding one of the one or more other users from a position in the virtual space because Lyren suggests that this way, the plurality of users in the virtual space are able to manipulate and view the virtual object as if the virtual object were a real object, and interacting with virtual objects in virtual space is useful for many situations where the users do not want to interact with real objects, for example, simulating wounds instead of the user actually getting wounded [0251, 0165].
As per Claim 6, Chanda teaches wherein in the generating, the manipulation by the target user is reflected in the images viewed by the one or more other users when the manipulation by the target user is determined to be applied to the one or more other users, and the manipulation by the target user is not reflected in the images viewed by the one or more other users when the manipulation by the target user is determined not to be applied to the one or more other users [0071].
As per Claim 7, Chanda teaches wherein in the generating, when the manipulation by the target user is determined to be applied to the one or more other users, the manipulation by the target user is reflected in an image viewed by at least one specific user among the plurality of users and is not reflected in an image viewed by a user other than the at least one specific user among the one or more other users [0071].
As per Claim 8, Chanda teaches wherein the at least one specific user is determined in advance for each of the plurality of users [0071].
As per Claim 9, Chanda teaches wherein the at least one specific user is determined according to input from the target user in a period in which the manipulation by the target user is determined to be applied to the one or more other users [0071].
As per Claim 12, Chanda teaches wherein the manipulation of the at least one object includes at least one of moving, rotating, enlarging, or shrinking the at least one object (while the object is being moved/resized by dragging, a series of volatile events is sent to the server, and re-broadcast to all clients subscribed to the workspace, [0343]).
As per Claim 13, Claim 13 is similar in scope to Claim 1, except that Claim 13 is directed to a device comprising: a processor; and memory, wherein using the memory, the processor performs the method of Claim 1. Chanda teaches a device comprising: a processor; and memory, wherein using the memory, the processor performs the method (host memory subsystem 1826 contains computer instructions which, when executed by the processor subsystem 1814, cause the computer to perform functions as described herein, [0233]). Thus, Claim 13 is rejected under the same rationale as Claim 1.
As per Claim 14, Chanda teaches a non-transitory computer-readable recording medium having recorded thereon a program for causing a computer to execute the work support method (a non-transitory computer readable storage medium impressed with computer program instructions, the computer program instructions, when executed on a processor, can implement any of the above-described methods, [0014]).
Claim(s) 2, 3, 10, and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chanda (US 20200272324A1) and Lyren (US 20200074746A1) in view of Leland (US 20220141265A1).
As per Claim 2, Chanda and Lyren are relied upon for the teachings as discussed above relative to Claim 1.
However, Chanda and Lyren do not teach wherein the first information includes at least the sound information, and the determining is conducted based on a result of an analysis obtained by analyzing content of the speech by the at least one user based on the sound information. However, Leland teaches wherein the first information includes at least the sound information, and the determining is conducted based on a result of an analysis obtained by analyzing content of the speech by the at least one user based on the sound information (the user interface of the control user may provide a voice command that signals for distribution of content, participants other than the control user may generate such an event through a similar user interface option provided by the collaboration tool, such requests for distribution of shared content by a non-control participant may initiate distribution of the content to just the requesting user, while in other instances such requests by non-control participant may initiate distribution of the content to all participants, [0046]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chanda and Lyren so that the first information includes at least the sound information, and the determining is conducted based on a result of an analysis obtained by analyzing content of the speech by the at least one user based on the sound information as suggested by Leland. It is well-known in the art to provide voice commands which a device receives and recognizes based on voice recognition techniques, and it is well-known in the art that this makes it easy for a user to input commands by simply speaking, without having to type, click, move, or otherwise manipulate an input device.
As per Claim 3, Chanda and Lyren do not teach wherein the determining includes: determining whether either a group work mode in which the plurality of users work in a coordinated manner or an individual work mode in which the plurality of users work individually is active for each of time sections based on the first information; and determining that the manipulation by the target user in each of the time sections in which the group work mode is determined to be active is to be applied to the one or more other users and that the manipulation by the target user in each of the time sections in which the individual work mode is determined to be active is not to be applied to the one or more other users. However, Leland teaches a timeseries 320 that illustrates the participation of various users in a collaborative session, such as a virtual meeting, where the timeseries 320 illustrates the duration of each user’s participation and also illustrates points of time at which specific sources of content are shared via the collaborative session. A virtual meeting is conducted under the control of participants that share content with other participants in the virtual meeting. At time 305a, the collaborative session is initiated with four users participating. The users each participate in the collaborative session via a collaborative tool that supports a workspace that allows content to be displayed for viewing by all participants [0044]. The collaborative session that is initiated at time 305a continues until time 305b when the control user opens two files that are then used to display content that is shared with other collaborative session participants, namely users A, B and C [0045]. At time 305c, an event triggering distribution of shared content is detected. The control user generates such an event through selection of a user interface option provided by the collaboration tool. 
The user interface of the control user provides a voice command that signals for distribution of content that has been shared thus far in a virtual meeting [0046]. The collaboration tool automatically initiates distribution of shared content upon detecting a threshold level of sharing of that particular content during a virtual meeting. The event at time 305c is generated automatically by the collaboration tool upon detecting ongoing sharing of content from these two files since the time the files were opened by the control user at time 305b. The use of such time thresholds for initiating automatic sharing may be selected in order to exclude sharing of content that is only briefly displayed, while including content that is displayed for significant intervals [0047]. Upon distribution of the two files, the virtual meeting continues with the closing of the two files by the control user at time 305d. At time 305e, the control user opens a third file for display to the participants. At time 305f, another event that triggers distribution of shared content is detected [0049]. Fig. 3 shows the timeseries 320, which shows that the collaboration begins at time 305a. After time 305a, it shows the time sections where manipulation by the target user is applied to other users, and it shows that at time 305f, the collaboration ends. Thus, before the virtual meeting and the collaboration begin at time 305a, and also after the virtual meeting and the collaboration end at time 305f, an individual work mode in which the plurality of users work individually is active, and while the individual work mode is active, the manipulation by the target user is not applied to the one or more other users.
During the virtual meeting, which is after the collaboration begins at time 305a and before the collaboration ends at time 305f, a group work mode in which the plurality of users work in a coordinated manner is active, and while the group work mode is active, the manipulation by the target user is applied to the one or more other users. Thus, Leland teaches wherein the determining includes: determining whether either a group work mode in which the plurality of users work in a coordinated manner or an individual work mode in which the plurality of users work individually is active for each of time sections based on the first information; and determining that the manipulation by the target user in each of the time sections in which the group work mode is determined to be active is to be applied to the one or more other users and that the manipulation by the target user in each of the time sections in which the individual work mode is determined to be active is not to be applied to the one or more other users [0044-0047, 0049] (Fig. 3).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chanda and Lyren so that the determining includes: determining whether either a group work mode in which the plurality of users work in a coordinated manner or an individual work mode in which the plurality of users work individually is active for each of time sections based on the first information; and determining that the manipulation by the target user in each of the time sections in which the group work mode is determined to be active is to be applied to the one or more other users and that the manipulation by the target user in each of the time sections in which the individual work mode is determined to be active is not to be applied to the one or more other users because Leland suggests that it is well-known in the art for users to be allowed to edit information that is shared during a virtual meeting, but when the virtual meeting is over, then the users will want to work on their devices individually without sharing it with anyone else [0003, 0044-0047, 0049].
As per Claim 10, Chanda and Lyren do not teach wherein the at least one specific user is determined based on at least one of information indicating positions of the one or more other users in the virtual space or information indicating attributes of the one or more other users. However, Leland teaches wherein the at least one specific user is determined based on at least one of information indicating positions of the one or more other users in the virtual space or information indicating attributes of the one or more other users (determination to distribute the information to the first IHS of the first user is based on the first user being detected in proximity to the second IHS of a second user participating in the collaborative session, [0005], in a scenario where a participant is determined to not be in proximity to the IHS throughout the display of the shared content, the collaboration tool may be configured to omit such users from the distribution of the shared content, [0063]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chanda and Lyren so that the at least one specific user is determined based on at least one of information indicating positions of the one or more other users in the virtual space or information indicating attributes of the one or more other users because Leland suggests that this way, there is a secure execution environment that only shares with authenticated users who are identified in the proximity, and thus prevents sharing with outside users who are not in the proximity who are not authenticated [0029-0031, 0005, 0063].
As per Claim 11, Chanda and Lyren do not teach wherein the first information includes at least the schedule information, and the schedule information includes information indicating a time period during which the group work mode is active and a time period during which the individual work mode is active. However, Leland teaches wherein the first information includes at least the schedule information, and the schedule information includes information indicating a time period during which the group work mode is active and a time period during which the individual work mode is active [0044-0047, 0049] (Fig. 3). This would be obvious for the reasons given in the rejection for Claim 3.
Claim(s) 4-5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chanda (US 20200272324A1), Lyren (US 20200074746A1), and Leland (US 20220141265A1) in view of Stevens (US 20070250506A1).
As per Claim 4, Chanda, Lyren, and Leland are relied upon for the teachings as discussed above relative to Claim 3.
However, Chanda and Lyren do not teach wherein the determining further includes: when the group work mode is determined to be active, determining that the manipulation by the target user is to be applied to the one or more other users. However, Leland teaches wherein the determining further includes: when the group work mode is determined to be active, determining that the manipulation by the target user is to be applied to the one or more other users, as discussed in the rejection for Claim 3.
However, Chanda, Lyren, and Leland do not teach wherein when the group work mode is determined to be active, determining whether the target user is a presenter; and determining that the manipulation by the target user is to be applied to the one or more other users when the target user is determined to be the presenter and that the manipulation by the target user is not to be applied to the one or more other users when the target user is determined not to be the presenter. However, Stevens teaches determining whether the target user is a presenter; and determining that the manipulation by the target user is to be applied to the one or more other users when the target user is determined to be the presenter and that the manipulation by the target user is not to be applied to the one or more other users when the target user is determined not to be the presenter (collaboration system may only allow presenters in the meeting to edit the resources that are being shared in the meeting, if the collaboration system determines that the requestor does not have sufficient privileges to edit the resource, then, the collaboration system reports the error condition, the collaboration system may inform the requester of the denial of the request due to lack of sufficient privileges, the collaboration system may provide the presenters in the meeting a notification of the failed attempt to edit the resource, [0024]).
Since Leland teaches wherein the determining further includes: when the group work mode is determined to be active, determining that the manipulation by the target user is to be applied to the one or more other users, as discussed in the rejection for Claim 3, this teaching from Stevens can be implemented into the device of Leland so that, when the group work mode is determined to be active, it is determined whether the target user is a presenter; and the manipulation by the target user is determined to be applied to the one or more other users when the target user is determined to be the presenter and not to be applied to the one or more other users when the target user is determined not to be the presenter.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chanda, Lyren, and Leland so that, when the group work mode is determined to be active, it is determined whether the target user is a presenter; and the manipulation by the target user is determined to be applied to the one or more other users when the target user is determined to be the presenter and not to be applied to the one or more other users when the target user is determined not to be the presenter because Stevens suggests that presenters are the ones who are presenting the presentation during the meeting, and thus they are allowed to manipulate the presentation, and only the presenters are allowed to manipulate the presentation in order to prevent other users from changing the presentation in a way that the presenters did not intend [0024].
As per Claim 5, Chanda, Lyren, and Leland do not teach wherein the first information includes at least the input information, and the input information includes information indicating whether the target user is the presenter. However, Stevens teaches wherein the first information includes at least the input information, and the input information includes information indicating whether the target user is the presenter [0024]. This would be obvious for the reasons given in the rejection for Claim 4.
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chanda (US 20200272324A1) and Lyren (US 20200074746A1) in view of Wang (US 20230205915A1).
As per Claim 15, Chanda and Lyren are relied upon for the teachings as discussed above relative to Claim 7.
However, Chanda and Lyren do not teach wherein the at least one specific user is determined based on information indicating attributes of the one or more other users. However, Wang teaches wherein the at least one specific user is determined based on information indicating attributes of the one or more other users (digital component providers benefit from being able to restrict the audience to whom digital components are displayed, such restrictions can be based on demographics of the audience, providing digital component providers the ability to distribute digital components to particular groups of users, e.g., users of particular demographic groups, [0003]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chanda and Lyren so that the at least one specific user is determined based on information indicating attributes of the one or more other users because Wang suggests that this is beneficial for users because more relevant content can be displayed to them [0003].
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONI HSU whose telephone number is (571)272-7785. The examiner can normally be reached M-F 10am-6:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JH
/JONI HSU/Primary Examiner, Art Unit 2611