DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This is in response to applicant’s amendment/response filed on 01/23/2025, which has been entered and made of record. Claims 1, 10, and 18 have been amended. No claim has been cancelled. No claim has been added. Claims 1-20 are pending in the application.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 10, and 18, and the dependent claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s arguments directed to the amended limitations have been addressed in the detailed rejection below with new references by Valdivia and Chiba.
The arguments regarding the dependent claims are moot by virtue of their dependency, because the independent claims are not allowable.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4, 6, 9-12, 14, and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fan (WO 2019218815 A1) in view of Valdivia et al. (US 20180098059 A1), and further in view of Chiba (US 20170308268 A1).
Regarding Claim 10, Fan discloses An electronic device (¶7 reciting “a method, an apparatus, a computer device, and a computer-readable storage medium for displaying marker elements in a virtual scene”) acting as a first terminal (Fig. 6 showing a first terminal) and comprising a processor and a memory, the memory storing at least one instruction, and the at least one instruction being loadable and executable by the processor and causing the electronic device to implement a method including: (¶22 reciting “A computer device comprises a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set or an instruction set, and the at least one instruction, at least one program, the code set or the instruction set is loaded and executed by the processor to implement the marking element display method in the above-mentioned virtual scene.”)
displaying a scene picture of a virtual scene; (Fig. 6. ¶84-85 reciting “Step 601: A first terminal displays a first display interface. . . Among them, the first display interface can display the picture when observing the virtual scene from the viewing direction corresponding to the first virtual object.”)
detecting an aiming operation performed on a target virtual object in the scene picture of the virtual scene by a first virtual object in the virtual scene controlled by the first terminal using an aim point of a virtual sight; (¶88 disclosing an aiming operation performed on a target virtual object in the scene picture, and reciting “As shown in Figure 7, the virtual scene 70 displayed by the terminal includes the current virtual object 71 and the crosshair icon 72 corresponding to the current virtual object 71. . . In a shooting game scene, the crosshair icon 72 can also indicate the direction in which the weapon held by the current virtual object 71 is aimed. When the user wants to mark a virtual object, for example, to mark a certain ground, the user can adjust the character's perspective so that the crosshair icon 72 is aimed at the ground, and then perform a quick operation (i.e., the abovementioned marking operation). For example, after pressing the shortcut Q key, the first terminal receives the user's marking operation on the ground.” In the example disclosed in ¶88, the user’s quick operation to aim at the virtual ground reads on an aiming operation, and the virtual ground reads on a target virtual object in the scene picture. Further, Fan teaches a first virtual object in the virtual scene controlled by the first terminal using a virtual aiming tool, i.e., the user can adjust the character's perspective so that the crosshair icon 72 is aimed at the ground.)
displaying description information of the target virtual object in the scene picture of the virtual scene; (¶89 disclosing in response to an aiming operation, displaying description information of the target virtual object (i.e. a mark type selection interface), and reciting “the first terminal may also display a mark type selection interface upon receiving the mark operation, wherein the mark type selection interface includes at least two mark options, each of which corresponds to a mark type;” Further, ¶90 reciting “As shown in Figure 8, the terminal displays a virtual scene 80. After the user performs a marking operation (such as pressing the shortcut Q key), the terminal superimposes and displays a marking type selection interface 81 on the virtual scene 80. . . For example, in Figure 8, the marking options included in the marking type selection interface 81 may include options corresponding to gun-shaped marking elements, options corresponding to grenade-shaped marking elements, and options corresponding to dagger-shaped marking elements, etc.”) and
in response to a touch operation performed on the description information, displaying marking information including position information marking a position of the target virtual object in the scene picture of the virtual scene. (¶93 disclosing an operation performed on the target marking type interface, and reciting “when the user corresponding to the first terminal selects a target marking type of a target virtual object when marking a virtual object, the first terminal may send a marking request including an identifier of the target virtual object and the target marking type to the server.” Further, ¶111 reciting “if the user corresponding to the first terminal selects the option corresponding to the gun-shaped marking element in the interface shown in FIG8 , then in this step, the first terminal and/or the second terminal will also obtain the gun-shaped marking element accordingly.” ¶117 disclosing displaying the marking information, and ¶126 reciting “The display box 1003 displays a numerical text of the distance between the virtual object corresponding to the marking element 1002 and the current virtual object 1001 (displayed as 85m in Figure 10)”. Fan discloses “During the display of the virtual scene, the capacitive touch system 150 may detect the touch operation performed by the user when interacting with the virtual scene.” (¶54). Further, Fan recites “The user selects the type of marking element to be set through selection operations, such as . . . touch clicks . . .” (¶90). Therefore, Fan does disclose the operation performed on the description information is a touch operation.)
However, Fan does not explicitly disclose in response to a determination that a distance between a position of the aim point and a display position of the target virtual object is less than or equal to a predetermined distance, displaying description information of the target virtual object.
Valdivia teaches “controls and interfaces for user interactions and experiences in a virtual reality environment.” (¶2). More specifically, ¶208 recites “a computing system may send information configured to render a first reticle on a display device, the first reticle being superimposed over a rendered virtual space, wherein the reticle is directed at a first focal point on a region of a rendered virtual space. At step 4220, the computing system may receive an input configured to move the reticle from the first focal point to a second focal point, wherein the second focal point is within a threshold distance of a hit target that is associated with a particular virtual object or interactive element. At step 4230, the computing system may select, from a plurality of reticle types, a particular reticle type”. See Figs. 9A-9D, in which the reticle changes from 920 to 930, and from 950 to 970, in response to a determination that a distance between the position of the aim point (i.e., the reticle) and a display position of the target virtual object (910 or 960) is less than a threshold distance.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Fan and Valdivia to display description information of the target virtual object in the scene picture (taught by Fan) in response to a determination that a distance between the position of the aim point (i.e., the reticle) and a display position of the target virtual object (910 or 960) is less than a threshold distance (taught by Valdivia). The suggestions/motivations would have been that “The concepts of reach and distance may be useful in making the virtual world more similar to the real world and making interactions in the virtual world more intuitive.” (Valdivia, ¶140), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
However, Fan in view of Valdivia does not explicitly disclose displaying marking information including a name of the target virtual object.
Including a name of the target virtual object in the marking information would have been a simple substitution of one known element for another to obtain predictable results. In addition, Chiba teaches “if the touch position is a position where an icon is displayed (step 1308), basic information 1101 such as a name of the object of the touched icon and a selection menu 1102 representing a list of the selection items obtained by defining predetermined manipulations allowable for the object in advance are displayed overlappingly on the navigation screen (Step 1322).” (¶119).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the device (taught by Fan in view of Valdivia) to include a name of the target virtual object in the displayed information (taught by Chiba). The suggestions/motivations would have been to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Regarding Claim 11, Fan in view of Valdivia and Chiba discloses The electronic device according to claim 10, wherein the method further comprises:
in response to the touch operation performed on the description information, transmitting the marking information to a second terminal controlling a teammate of the first virtual object. (Fan, ¶81 reciting “when the user is in team mode, the virtual objects marked by the user in the virtual scene can be shared with teammates' terminals for display of the marked elements.” ¶96 reciting “the second terminal may be a terminal used by a friendly user (such as a teammate) of the user corresponding to the first terminal.” ¶102 reciting “Step 605: When it is detected that the target virtual object is within the visible distance of the current virtual object, the server sends marking indication information to the corresponding terminal, and the first terminal and/or the second terminal receives the marking indication information.” In addition, ¶90 teaching a touch operation performed on the description information.)
Regarding Claim 12, Fan in view of Valdivia and Chiba discloses The electronic device according to claim 11, wherein the method further comprises:
in response to the touch operation performed on the description information, causing a display of the marking information of the target virtual object in a scene picture of the virtual scene on the second terminal. (Fan, ¶116 reciting “Step 608: The first terminal and/or the second terminal displays the marking element at a designated position around the target virtual object in the display interface of the virtual scene.” In addition, ¶90 teaching a touch operation performed on the description information.)
Regarding Claim 14, Fan in view of Valdivia and Chiba discloses The electronic device according to claim 10, wherein the displaying description information of the target virtual object in the scene picture comprises:
displaying the description information within a first distance from the target virtual object in the scene picture of the virtual scene.
(Fan, Figs. 7-8 showing the description information 81 within a first distance from the target virtual object 72)
Regarding Claim 17, Fan in view of Valdivia and Chiba discloses The electronic device according to claim 10, wherein the touch operation comprises a tap operation, a double-tap operation, or a long-press operation on a touch display screen of the electronic device. (Fan, ¶90 reciting “The user selects the type of marking element to be set through selection operations, such as mouse clicks, touch clicks”, where the touch clicks read on a tap operation.)
Regarding Claim 4, Fan in view of Valdivia and Chiba discloses The method according to claim 1, wherein the description information comprises an icon of the target virtual object. (Fan, ¶126 reciting “As shown in Figure 10, the virtual scene 100 displayed by the terminal includes a current virtual object 1001 and a marking element 1002 of the virtual object (i.e., the inverted triangle icon in Figure 10).”)
Claim 1 has limitations similar to those of Claim 10; therefore, it is rejected under the same rationale as Claim 10.
Claim 2 has limitations similar to those of Claim 11; therefore, it is rejected under the same rationale as Claim 11.
Claim 3 has limitations similar to those of Claim 12; therefore, it is rejected under the same rationale as Claim 12.
Claim 6 has limitations similar to those of Claim 14; therefore, it is rejected under the same rationale as Claim 14.
Claim 9 has limitations similar to those of Claim 17; therefore, it is rejected under the same rationale as Claim 17.
Claim 18 has limitations similar to those of Claim 10; therefore, it is rejected under the same rationale as Claim 10.
Claim 19 has limitations similar to those of Claim 11; therefore, it is rejected under the same rationale as Claim 11.
Claim 20 has limitations similar to those of Claim 12; therefore, it is rejected under the same rationale as Claim 12.
Claim(s) 5 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fan in view of Valdivia and Chiba as applied to claims 10 and 1, and further in view of Zhou (US 20150149384 A1).
Regarding Claim 13, Fan in view of Valdivia and Chiba discloses The electronic device according to claim 10.
However, Fan in view of Valdivia and Chiba does not explicitly disclose wherein the method further comprises:
canceling the display of the description information when the aiming operation performed on the target virtual object stops.
Zhou teaches “when the mouse moves out of the exhibition area of the published information, the front-end controller will cancel an operation of displaying user assessment information of a product, and the displayed user assessment information will disappear.” (¶26).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Fan in view of Valdivia and Chiba with Zhou to cancel the display of the description information when the aiming operation stops (corresponding to the mouse moving out of the exhibition area). The suggestions/motivations would have been that “the information can be checked simply, conveniently and flexibly, thereby improving the click-through rate and conversion rate of the published information” (Zhou, ¶26), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Claim 5 has limitations similar to those of Claim 13; therefore, it is rejected under the same rationale as Claim 13.
Claim(s) 7-8 and 15-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fan in view of Valdivia and Chiba as applied to claims 10 and 1, and further in view of Yang (WO 2019205839 A1, US 20200330866 A1 is used as the English translation for this rejection).
Regarding Claim 15, Fan in view of Valdivia and Chiba discloses The electronic device according to claim 10, wherein the marking information comprises distance prompt information, indicating a distance between the target virtual object and the first virtual object in the virtual scene. (Fan, ¶126 reciting “The display box 1003 displays a numerical text of the distance between the virtual object corresponding to the marking element 1002 and the current virtual object 1001 (displayed as 85m in Figure 10).”)
However, Fan in view of Valdivia and Chiba does not explicitly disclose the marking information indicating a real time distance.
Yang teaches “In step 201 to step 204, the terminal may obtain a distance between the first virtual object and the second virtual object in real time, and update and display location indication information of the second virtual object according to a change of the distance.” (¶89). In other words, Yang teaches the displayed distance is updated in real time.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the device (taught by Fan in view of Valdivia and Chiba) to obtain and display real time distance information (taught by Yang). The suggestions/motivations would have been “to resolve the problems of non-intuitive display effect, and/or low display efficiency” (Yang, ¶5), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.
Regarding Claim 16, Fan in view of Valdivia, Chiba, and Yang discloses The electronic device according to claim 10, wherein the displaying marking information of the target virtual object in the scene picture of the virtual scene comprises dynamically displaying the marking information at a position of the scene picture of the virtual scene according to a relative position of the target virtual object from the first virtual object. (Fan, ¶80 reciting “From the perspective of the content displayed on the terminal interface, through the scheme shown in Figure 3, the terminal can display the display interface of the virtual scene and control the movement of the virtual object in the virtual scene, such as at least one of movement and rotation; when there is a target virtual object in the display interface, the terminal displays a marking element at a specified position around the target virtual object in the display interface.” See also the rejection of Claim 15 above.)
Claim 7 has limitations similar to those of Claim 15; therefore, it is rejected under the same rationale as Claim 15.
Claim 8 has limitations similar to those of Claim 16; therefore, it is rejected under the same rationale as Claim 16.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YI WANG whose telephone number is (571)272-6022. The examiner can normally be reached 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YI WANG/Primary Examiner, Art Unit 2619