DETAILED ACTION
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-5, 17-19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al., US 2020/0215430, in view of Tian et al., US 2021/0291054, and Avent et al., US 2012/0028700.
In Reference to Claims 1, 17, and 18
Takahashi et al. teaches an electronic device with a processor, storage medium, and a bus, where a graphical user interface is provided by the electronic device and where machine-readable instructions executable by the processor are in communication with the storage medium through the bus; a non-transitory computer readable storage medium having a computer program stored thereon, where a graphical user interface is provided by a terminal device and the computer program, when run by a processor, executes operations; and a method for displaying information of a virtual object, wherein a graphical user interface is provided by a terminal device (Fig. 2A, Fig. 5, Par. 21, 23, 48-52, which teach the electronic device and non-transitory computer readable storage medium, and Par. 25-26, which teach displaying a user interface for game annotation information), and wherein the method comprises: displaying in the graphical user interface a first virtual scene and a first virtual object located in the first virtual scene (Par. 25-26, which teach displaying a user interface, and Par. 12-13, which teach that the user interface can display the virtual environment of a game, including a 3D environment, and digital game objects within the displayed virtual environment); displaying in the graphical user interface note prompt information of at least one second virtual object in response to a note addition operation (Fig. 3 and Par. 25-26, which teach where a user selects a game object from their user interface and uses the user interface to create an annotation for the game object; see in particular Par. 26, "For example the UI may present a list of game assets associated with the game" and "As part of operation 304, the first user can add text and select options for the note"); and adding note information to a target virtual object among the at least one second virtual object displayed in response to a trigger operation for the note prompt information (Fig. 3 and Par. 25-26, which teach adding the annotation).
Further, Takahashi et al. teaches a 3D virtual environment as described above, where users of the software can be game players (Par. 19) and where the user device can be for playing the game (Par. 20). However, Takahashi et al. does not explicitly teach where in response to a movement operation for the first virtual object, controlling the first virtual object to move in the first virtual scene, and controlling, according to movement of the first virtual object, a first virtual scene range displayed in the graphical user interface to change accordingly, where the added note information comprises information obtained by a player controlling the first virtual object to move through inference regarding a virtual object controlled by another player, or in response to a preset trigger event, displaying, in the graphical user interface of the terminal device corresponding to the first virtual object, a second virtual scene corresponding to a discussion stage, and displaying, for a target virtual object in the second virtual scene, a visual marker corresponding to the note information.
Tian et al. teaches a game system which allows players to mark and make annotations to game objects during play of a game (Fig. 9 and 13, Par. 3 and Par. 158-174) and where in response to a movement operation for the first virtual object, controlling the first virtual object to move in the first virtual scene, and controlling, according to movement of the first virtual object, a first virtual scene range displayed in the graphical user interface to change accordingly (Abstract, Fig. 1 and Par. 34-37, 52 and 55, where a user controls a playable character to move around in a virtual environment and view different game objects such as in a battle royale game).
It would be desirable to modify the system of Takahashi et al. to include allowing users to control movement of a character around the game environment during gameplay and to annotate game objects during gameplay, as taught by Tian et al., in order to use the contextual collaboration system of Takahashi et al. to allow for improved teamwork and collaboration between game players during their game play session, similar to the marking system taught by Tian et al., by giving players access to more detailed shared information about the game environment through the annotations.
Avent et al. teaches a game system for making notes in gameplay between players which teaches where the added note information comprises information obtained by a player controlling the first virtual object to move through inference regarding a virtual object controlled by another player (Fig. 4B – Fig. 5 and Par. 65-67, 72, and 74-76, which teach where a first player, Bob, controls their avatar character to finish a level and then enters a spectating stage where Bob views gameplay of a second player, Alice, and can make annotations of game objects, such as the pipe in Fig. 5, by "inferring" from the second player's gameplay that they might miss a game area and that the second player might like a hint in the form of an annotation), or in response to a preset trigger event, displaying, in the graphical user interface of the terminal device corresponding to the first virtual object, a second virtual scene corresponding to a discussion stage, and displaying, for a target virtual object in the second virtual scene, a visual marker corresponding to the note information (Fig. 4B – Fig. 5 and Par. 65-67, 72, and 74-76, which teach a "milestone" where Bob switches game view to spectating the player character of Alice and can provide game annotations that are shown pointing to particular game objects marked in the UI, see Fig. 5. The examiner considers this spectating view, where the players can provide hints and advice, to constitute "a discussion stage.").
It would be desirable to modify the system of Takahashi et al. and Tian et al. to include a spectator "discussion stage" and the provision of annotations and hints inferred from gameplay, as taught by Avent et al., in order to allow players to make notes based on observing other players' play as well as their own, recognizing behavior in another player which may warrant making a note, such as the hint of Avent et al., and allowing this information to be viewed and potentially shared at points in the game appropriate for the game action.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Takahashi et al. to include allowing users to control movement of a character around the game environment during gameplay and annotation of game objects during gameplay as taught by Tian et al., and to modify the system of Takahashi et al. and Tian et al. to include a triggered "discussion stage" and providing annotations and hints inferred based on the gameplay of other players as taught by Avent et al.
In Reference to Claims 2 and 19
Takahashi et al., Tian et al., and Avent et al. teach, in response to the note addition operation, displaying a note list in the graphical user interface, and displaying in the note list the note prompt information of the at least one second virtual object (Takahashi et al. Par. 26, "list of game assets," which teaches a user interface to generate the note when selecting the game object from the list).
In Reference to Claims 4 and 21
Takahashi et al., Tian et al., and Avent et al. teach displaying in the note list the note prompt information of the at least one second virtual object according to a display priority of the second virtual object, wherein the display priority is determined according to a distance between the at least one second virtual object and the first virtual object (Takahashi et al. Par. 27: "In accordance with an embodiment, the client module 106 displays data from a selection of notes wherein the selection is determined by one or more of: a user 130 via an input device 108, or by the module 106 (e.g., determined by game objects visible in the environment as seen via the UI)." Tian et al. teaches the user controlling a first game object to move around a virtual environment and viewing the game environment from that character, as described above in reference to Claims 1 and 17. For the combination of Takahashi et al. and Tian et al., the examiner considers the game objects visible in the environment to constitute a display priority based on distance. See also Tian et al. Fig. 2 and Par. 50: "As shown in (d) of FIG. 2, mark information 204 is displayed in the position of the virtual item. For example, the mark information 204 includes: an image of the virtual item and a distance between the virtual item and the virtual object.").
In Reference to Claim 5
Takahashi et al. teaches displaying in the note list the at least one second virtual object and a note control, and displaying in the note control the note prompt information of the at least one second virtual object (Par. 26, which teaches using a user interface and UI windows to add notes to game objects, and Par. 27: "In accordance with an embodiment, the client module 106 displays data from a selection of notes wherein the selection is determined by one or more of: a user 130 via an input device 108," where the examiner considers a user input to input and display the note prompt information to constitute a note control).
Claims 3 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al., US 2020/0215430, Tian et al., US 2021/0291054, Avent et al., US 2012/0028700, and further in view of Fear et al., US 2020/0306638.
In Reference to Claims 3 and 20
Takahashi et al., Tian et al., and Avent et al. teach a method and device as described above in reference to Claims 2 and 19, including displaying in the note list note prompt information of at least one third virtual object (Par. 26-27, which teach a plurality of notes on a plurality of objects). Further, Takahashi et al. teaches that the game objects can be anything, including characters (Par. 13). However, they do not explicitly teach where the at least one second virtual object is a virtual object in an alive state, and the at least one third virtual object is a virtual object in a dead state.
Fear et al. teaches a game system for a shooting game which includes providing information to a player about game objects and where some objects are in alive state and some objects are in a dead state (Fig. 1 and Par. 43).
It would be desirable to modify the method and device of Takahashi et al., Tian et al., and Avent et al. to include annotations about game characters in the alive state or the dead state, as taught by Fear et al., in order to allow players to share notes about other players in the multiplayer game, including both living players and dead players, so as to share game information for game genres where players or enemies are killed and defeated. For example, in a shooting game a player may want to make a note on a living game object saying "This enemy is using a sniper rifle" to warn teammates about the tactics of a currently living enemy, whereas they may want to make a note on the corpse of an enemy saying "There is a good rifle on this dead enemy, come and loot it from them" to better apprise other players of the game state, similar to the marks described in Tian et al.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and device of Takahashi et al., Tian et al., and Avent et al. to include annotations about game characters in the alive state or the dead state as taught by Fear et al.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al., US 2020/0215430, in view of Tian et al., US 2021/0291054, and Avent et al., US 2012/0028700, and further in view of the gamefaqs.gamespot.com webpage titled "Among Us Sus Tracker" (hereinafter Tracker), which describes the functionality of a note information program for the game "Among Us."
In Reference to Claim 7
Takahashi et al. and Tian et al. teach a method as described above in reference to Claims 1 and 6, including note information and adding note information to a target virtual object among at least one second virtual object displayed in response to a trigger operation for the note prompt information. Further, Takahashi et al. teaches that notes can consist of a variety of types of information, including text and images (Par. 25), and that more complex annotations could be implemented (Par. 30). However, they do not explicitly teach where the note prompt information comprises a plurality of pieces of identity information configured to indicate an identity of a virtual object, and the note information comprises an identity identifier, or in response to a selection operation for target identity information, displaying an identity identifier corresponding to the selected target identity information at a preset position corresponding to the target virtual object.
Tracker teaches gameplay note-taking software (Page 2, which teaches an Android app) where the note prompt information comprises a plurality of pieces of identity information configured to indicate an identity of a virtual object, and the note information comprises an identity identifier, or in response to a selection operation for target identity information, displaying an identity identifier corresponding to the selected target identity information at a preset position corresponding to the target virtual object (See the screenshot on Page 1, which is an enlarged copy of the image linked in the body of the Gamefaqs post, where a note list for tracking game information is provided. See Page 2, which teaches "Alright folks, I wrote a small Android app to help keep quick notes while playing Among Us." and "This app helps keep track. If you think someone is safe, you can mark them safe. If they've done a visual, you can confirm it. It also lets you mark players as dead or you can remove them altogether if they are not playing. It also has a notepad for additional details if you need them." See in particular where, at a specific location in the list in association with each player, there is a user interface to enter identity information indicating whether the particular player is "Out," "Safe," "Sus" (i.e., suspicious), or "Imp" (i.e., Imposter). The examiner considers this to constitute identity information and displaying identity information at a preset position corresponding to the target virtual object.).
It would be desirable to modify the combination of Takahashi et al., Tian et al., and Avent et al. to include identity tracking notes as taught by Tracker in order to increase the enjoyment of game players by providing note-taking functionality to aid them in playing social identity tracking or "Werewolf" style games such as Among Us, particularly where players may otherwise struggle to remember details during such a game, as described in Tracker.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Takahashi et al., Tian et al., and Avent et al. to include identity tracking notes as taught by Tracker.
Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al., US 2020/0215430, Tian et al., US 2021/0291054, Avent et al., US 2012/0028700, and Tracker, and further as evidenced by the computer program "Among Us" (Innersloth), hereinafter "Among Us". The evidence constituting Among Us consists of: 1) the Among Us wiki webpage "Voting" (Voting); and 2) the Among Us wiki webpage "Emergency Meeting" (Emergency Meeting).
In Reference to Claim 8
Takahashi et al., Tian et al., Avent et al. and Tracker teach a method as described above in reference to Claim 7. However, they do not explicitly teach when displaying the second virtual scene in the graphical user interface, displaying an identity voting control of the at least one second virtual object in the graphical user interface; and in response to a voting operation for the identity voting control, performing a corresponding voting instruction.
Among Us teaches a game which teaches when displaying a second virtual scene in the graphical user interface (See Emergency Meeting “Emergency meetings are events that occur once the emergency button is pressed or dead bodies are reported,” “During emergency meetings, players are teleported to either Cafeteria or Office, depending on the map, and are unable to move,” and “Whenever an emergency meeting is called, the player who called it will receive a megaphone icon on the right side of their name bar in the voting screen.”), displaying an identity voting control of the at least one second virtual object in the graphical user interface; and in response to a voting operation for the identity voting control, performing a corresponding voting instruction (See Voting “Every living player gets to cast a vote on who they think The Impostor(s) are, and whoever has the most votes, at the end of voting time or when all living players have voted, will get ejected. To vote, players must select the person they wish to vote or the "skip vote" button, then confirm by clicking the green checkmark. The selection can be canceled by clicking the red X, or changed by selecting another player or the "skip vote" button.” And “After everybody has voted or the time has run out, the votes will be shown, displaying the number of votes for each player, as well as skipping.” And “If there is a tie, or if the majority voted to skip, nobody will get ejected.” Which teaches a voting interface for the game).
It would be desirable to modify the method of Takahashi et al., Tian et al., Avent et al., and Tracker to include a voting interface at an emergency meeting scene as taught by Among Us in order to increase the enjoyment of the players by allowing the game system of Takahashi et al., Tian et al., and Tracker to implement a social identity tracking or "Werewolf" style game with player voting mechanics, allowing players who enjoy that type of game to play it with the assistance of note tracking as taught by Takahashi et al., Tian et al., Avent et al., and Tracker.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Takahashi et al., Tian et al., Avent et al., and Tracker to include a voting interface at an emergency meeting scene as taught by Among Us.
In Reference to Claim 9
Takahashi et al. as modified by Tian et al., Avent et al., Tracker, and Among Us teach wherein the second virtual scene comprises a plurality of virtual objects (Takahashi et al. Par. 26-27; Tracker Page 1), and the plurality of virtual objects comprise the first virtual object, the at least one second virtual object, and/or at least one third virtual object, wherein the at least one second virtual object is a virtual object in an alive state, and the at least one third virtual object is a virtual object in a dead state (Tracker Pages 1-2: "If you think someone is safe, you can mark them safe. If they've done a visual, you can confirm it. It also lets you mark players as dead"), and wherein the method further comprises, in response to the note addition operation, displaying in the graphical user interface note prompt information of the at least one second virtual object and/or the at least one third virtual object; and in response to the trigger operation for the note prompt information, adding note information to a target virtual object among the at least one second virtual object and/or the at least one third virtual object displayed (Takahashi et al. Par. 26-27 and Tracker Pages 1-2).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al., US 2020/0215430, Tian et al., US 2021/0291054, Avent et al., US 2012/0028700, further in view of Tanzawa et al., US 2013/0331182.
In Reference to Claim 10
Takahashi et al., Tian et al., and Avent et al. teach a method as described above in reference to Claim 1, including where the at least one second virtual object is a plurality of second virtual objects, and the step of displaying in the graphical user interface the note prompt information of the at least one second virtual object in response to the note addition operation (Par. 26-27). Further, Takahashi et al. teaches, in response to selecting objects, displaying the note prompt information for the selected objects (Par. 26-27). However, Takahashi et al. does not explicitly teach a multi-selection operation for a plurality of second virtual objects.
Tanzawa et al. teaches a multi-selection operation for a plurality of second virtual objects (Fig. 5 and Par. 76).
It would be desirable to modify the method of Takahashi et al. to include a multi-selection operation as taught by Tanzawa et al. in order to allow the user to save time by adding or viewing annotations for multiple selected objects at once.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Takahashi et al., Tian et al., and Avent et al. to include a multi-selection operation as taught by Tanzawa et al.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al., US 2020/0215430, Tian et al., US 2021/0291054, Avent et al., US 2012/0028700, further in view of Kruglick, US 2014/0092127.
In Reference to Claim 11
Takahashi et al., Tian et al., and Avent et al. teach a method as described above in reference to Claim 1. Further, Takahashi et al. teaches, in response to the note addition operation, generating a note control and displaying in the note control the note prompt information of the at least one second virtual object (Takahashi Par. 26, which teaches using a user interface and UI windows to add notes to game objects, and Par. 27: "In accordance with an embodiment, the client module 106 displays data from a selection of notes wherein the selection is determined by one or more of: a user 130 via an input device 108," where the examiner considers a user input to input and display the note prompt information to constitute a note control). Further, Takahashi et al. teaches that annotations can include images (Par. 25) and that the virtual objects are displayed on screen (Par. 13 and 20). However, Takahashi et al. does not teach where annotations include a screenshot image corresponding to the current graphical user interface, the screenshot image comprising particular media content on screen.
Kruglick teaches a system for annotating media content where annotations can include a screenshot image of the software, the screenshot image comprising particular media content on screen (Par. 47: "For example the content provider and/or the communication network utilized for requesting and viewing the media file 202 may post a screenshot of the media with the overlaid comment 212," which further teaches that the screenshot may include additional context information such as a date and time of the annotations).
It would be desirable to modify the system of Takahashi et al., Tian et al., and Avent et al. to include screenshot annotations as taught by Kruglick in order to provide users with greater information about user annotations by providing a screenshot image and timestamps, which provide greater context for the situation and time at which the annotation was made by showing the commenter's screen and how they were viewing the virtual object.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Takahashi et al., Tian et al., and Avent et al. to include screenshot annotations as taught by Kruglick.
Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al., US 2020/0215430, Tian et al., US 2021/0291054, Avent et al., US 2012/0028700, and Kruglick, US 2014/0092127, and further in view of Yang et al., US 2013/0042171.
In Reference to Claim 12
Takahashi et al., Tian et al., Avent et al., and Kruglick teach a method as described above in reference to Claim 11; however, they do not teach, in response to a drag operation for the note control, dragging the note control to a position where the target virtual object is located, to add the note prompt information corresponding to the dragged note control to the target virtual object.
Yang et al. teaches a system for adding annotations to objects in a program which teaches, in response to a drag operation for the note control, dragging the note control to a position where the target virtual object is located, to add the note prompt information corresponding to the dragged note control to the target virtual object (Par. 57: "In the present embodiment, a function to select an object from digital content displayed on a screen of the touch sensing display, which is referred to as an object selection function, and an annotation function to apply an annotation to the selected object may be provided." and "When a menu associated with an annotation is selected after a predetermined object is selected through the object selection function or when a menu associated with an annotation is dragged and dropped to a desired object, the annotation function may be recognized.").
It would be desirable to modify the method of Takahashi et al., Tian et al., Avent et al., and Kruglick to use drag-and-drop adding of annotations as taught by Yang et al. in order to increase the enjoyment of the user by allowing convenient and intuitive controls for adding annotations via touch gestures when the game software is being operated on a computing device which primarily uses touch inputs, such as a smartphone or tablet.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Takahashi et al., Tian et al., Avent et al., and Kruglick to use drag-and-drop adding of annotations as taught by Yang et al.
In Reference to Claim 13
Takahashi et al. as modified by Tian et al., Avent et al., Kruglick, and Yang et al. teach storing the screenshot image comprising the target virtual object with the added note information, and recording a note time (Kruglick Par. 47, which teaches annotated screenshots with timestamps; Takahashi et al. Par. 27, which teaches storing and displaying annotations associated with target virtual objects); and in response to a note viewing operation, displaying in the graphical user interface the stored screenshot image according to the note time (Kruglick Par. 47 and Takahashi Par. 27).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al., US 2020/0215430, Tian et al., US 2021/0291054, Avent et al., US 2012/0028700, further in view of Black et al., US 2017/0262154.
In Reference to Claim 14
Takahashi et al., Tian et al., and Avent et al. teach a method as described above in reference to Claim 1. However, they do not explicitly teach displaying the added note information around the target virtual object.
Black et al. teaches a system for annotating objects in a virtual environment which includes displaying the added note information around the target virtual object (Fig. 5B and Par. 89-90: "Upon receiving the tag data, the tag 508A is displayed within the virtual environment A1 on the HMD 404A as being associated with the virtual item 502A. For example, a pointer of the tag 508A points towards the virtual item 502A. As another example, the tag 508A is located within a pre-determined distance from the virtual item 502A. As yet another example, the tag 508A is located closest to the virtual item 502A compared to all other virtual items in the virtual environment A1.").
It would be desirable to modify the method of Takahashi et al., Tian et al., and Avent et al. to include displaying annotations around the virtual object as taught by Black et al. in order to assist the user in easily and intuitively understanding the association of a particular annotation with a particular virtual object in the virtual environment based on displaying them in proximity.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Takahashi et al., Tian et al., and Avent et al. to include displaying annotations around the virtual object as taught by Black et al.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Takahashi et al., US 2020/0215430, Tian et al., US 2021/0291054, Avent et al., US 2012/0028700, Black et al., US 2017/0262154, further in view of Morel et al., US 2014/0274377.
In Reference to Claim 15
Takahashi et al., Tian et al., Avent et al., and Black et al. teach a method as described above in reference to Claim 14, including displaying additional information about an object around the virtual object. However, they do not explicitly teach, if the target virtual object and the first virtual object are teammates and attribute information of the target virtual object is known, displaying the added note information and the known attribute information.
Morel et al. teaches if the target virtual object and the first virtual object are teammates and attribute information of the target virtual object is known, displaying the added note information and the known attribute information (Par. 59).
It would be desirable to modify the method of Takahashi et al., Tian et al., Avent et al., and Black et al. to include displaying additional known information for selected teammates as taught by Morel et al. in order to assist players of the game by allowing them to easily gain and view more information about their teammates' game situation when selecting them, such as in the team shooting game of Tian et al., thereby allowing players to play in a more strategic or effective manner by taking their teammates' attribute information into account.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Takahashi et al., Tian et al., Avent et al., and Black et al. to include displaying additional known information for selected teammates as taught by Morel et al.
Response to Arguments
Applicant's arguments filed 12/23/2025 have been fully considered. New grounds of rejection incorporating the Avent et al. reference have been provided to better address the new scope of the amended claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARL V LARSEN whose telephone number is (571)270-3219. The examiner can normally be reached Monday through Friday; 10:00 am - 6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dmitry Suhol can be reached at (571) 272-4430. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CARL V LARSEN/Examiner, Art Unit 3715