Prosecution Insights
Last updated: April 19, 2026
Application No. 18/283,382

GAME INTERFACE INTERACTION METHOD, SYSTEM, AND COMPUTER READABLE STORAGE MEDIUM

Status: Non-Final OA (§103, §112)
Filed: Mar 11, 2024
Examiner: FIBBI, CHRISTOPHER J
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Shanghai Lilith Computer Technology Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 53% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 3m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 53% (199 granted / 376 resolved; -2.1% vs TC avg)
Interview Lift: +37.6% in resolved cases with an interview (strong)
Typical Timeline: 4y 3m average prosecution; 40 applications currently pending
Career History: 416 total applications across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§103: 62.9% (+22.9% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 376 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Priority

This action is in response to the U.S. filing dated 21 September 2023, which is a national stage entry of PCT/CN2021/132258, dated 23 November 2021, which claims a foreign priority date of 24 March 2021. A preliminary amendment was submitted on 11 March 2024. Claims 3, 4 and 10 have been amended. No claims have been added or cancelled. Claims 1-10 are pending and have been considered below.

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the line segment representing remaining health points of game objects, the random selection of points on the line segment, the selection of a falling interval, and the application of an operation instruction to the game object based on ranking, as described in claims 7 and 9, must be shown or the features canceled from the claims. No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claims 2, 3 and 4 are objected to because of the following informalities: claims 2, 3 and 4 continue referring to elements that were already established in previous claim dependencies as “a [element]” instead of “the [element]” (e.g. “a longitudinal calibration basis” [claims 2 and 4], “a starting point” [claims 2 and 4], “a distance vector” [claims 2 and 4], “a control module” [claim 2], “a volume” [claim 3], “a first volume control information” [claim 4], “a second volume control information” [claim 4]). Examiner suggests amending these recitations to “the” element. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Dependent claims 7 and 9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. It is unclear how points are randomly selected on the line segment; it is unclear whether these randomly selected “points” are “health points” and, if so, whether they are a particular numerical health value or a coordinate on the line; and it is unclear whether the weight ranking of the game object is based on the weights established for the operation instructions. The claims are therefore indefinite.

Claims Interpreted as Invoking 35 U.S.C. 112(f)/Sixth Paragraph

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “calculation module configured to define”, “control module configured to form”, “storage module configured to store”, “audio module configured to run”, “obtaining module configured to obtain”, “execution module configured to select”, “statistics collection unit configured to form”, “determining unit configured to randomly select”, and “execution unit configured to apply” in claims 8 and 9.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Shimizu (US 5,754,660) in view of Neymotin et al. (US 2016/0071546 A1).

As for independent claim 1, Shimizu teaches a method comprising:

storing at least one game interface picture with an intelligent terminal running a game application program, wherein each game interface picture comprises at least two game scene units [(e.g. see Shimizu col 3 lines 23-41) “a video game unit for generating images and sounds including music and sound effects for a game, and comprises an image processing unit 11 and an audio processing unit 12. An image memory 13 is connected to the image processing unit 11 through an address bus and a data bus. Further, an external memory 20 and an operating device/controller 30 are detachably connected to the image processing unit 11 … Image display data are provided to an image signal generation circuit 14. Specifically, the image processing unit 11 generates image display data for displaying one or a plurality of objects. Some of the objects have associated sounds such as music and/or a sound effect, e.g., a waterfall, a river, an animal, an automobile, an airplane, or the like”];

configuring a display interface, wherein the display interface corresponds to a display screen of the intelligent terminal [(e.g. see Shimizu col 1 lines 55-57) “an image display device for displaying a three-dimensional image, realistic sounds are generated to correspond with changing three-dimensional images”];

defining a longitudinal calibration basis of each of the game scene units as a starting point, and calculating a distance vector between the display interface and each of the starting points [(e.g. see Shimizu col 4 lines 13-19, col 5 lines 14-18, col 7 lines 7-15, col 10 line 54 – col 11 line 3) “The coordinate data fed to the audio processing unit 12 also includes … Y coordinate data representing the longitudinal/vertical direction of the display screen … Such relationships for controlling distance/direction and/or the sound volume may be predetermined, e.g., stored in a look-up table, or embodied as one or more equations stored in real time. In the look-up table embodiment, sound volume values of the waveform are stored in a table for each left and right unit distance centered around the position of the virtual camera (or the hero character) and read out using the current distance as an address to the table … the audio processing unit 12 finds the distance between the sound generating object and the virtual camera or the hero character on the basis of the first and second coordinate data, and determines the sound volume on the basis of the distance … the coordinate data storage area 15c sorts coordinate data of an object 1 generating sounds such as an enemy character or a waterfall as coordinate data of the object 1. The coordinate data storage area 15c stores coordinate data of an object 2 such as a virtual camera (or the hero character) whose line of sight moves to see the object 1 by an operator operating the controllers 30 as coordinate data of the object 2. When sounds are generated from the object 1, the M-CPU 51 calculates a direction to the object 1 as viewed from the object 2 and the distance therebetween on the basis of the coordinate data of the object 1 and the coordinate data of the object 2. Further, a program for producing three-dimensional sound effects from the characteristic views of FIGS. 3 to 6 is executed on the basis of the direction and the distance … the sound volume”];

running and playing at least two game audios within the game application program, wherein each game audio corresponds to a game scene unit [(e.g. see Shimizu col 2 lines 7-14, col 7 lines 47-48) “A first digital-to-analog converter converts the first sound source data into an analog audio signal which is fed to a first sound generator, e.g., a left or right speaker. A second digital-to-analog converter converts the second sound source data read out from the temporary storage section into an analog audio signal which is fed to a second sound generator, e.g., the other of the left or right speaker … the sound volumes of the left and right audio signals may be similarly controlled”];

when the display interface moves laterally within the game interface picture, forming, by a control module of the intelligent terminal, an audio control instruction based on the distance vector, to change audio parameters of each of the game audios [(e.g. see Shimizu col 7 lines 22-41, col 10 line 54 – col 11 line 3 and Fig. 2) “the sound volume of the left audio signal is set to a maximum amount and the sound volume of the right audio signal is set to zero when the sound generating object exists on the left side at an angle of 0° as viewed from the virtual camera (or the hero character) (see FIG. 3). As the sound generating object moves to the right drawing a semicircle of radius "r" around the virtual camera (or the hero character) as shown in FIG. 2, the sound volume of the right audio signal is gradually increased and the sound volume of the left audio signal is gradually decreased, as indicated by the characteristic view of FIG. 3. When the sound generating object reaches the front of the virtual camera (or the hero character) at position at an angle of 90° from the left side, the sound volumes of the left and right audio signals are made equal. Further, when the sound generating object moves right to reach a position on the right side of the virtual camera (or the hero character) at an angle of 180° from the left side, the sound volume of the left audio signal is set to zero, and the sound volume of the right audio signal is set to the maximum amount … the coordinate data storage area 15c sorts coordinate data of an object 1 generating sounds such as an enemy character or a waterfall as coordinate data of the object 1. The coordinate data storage area 15c stores coordinate data of an object 2 such as a virtual camera (or the hero character) whose line of sight moves to see the object 1 by an operator operating the controllers 30 as coordinate data of the object 2. When sounds are generated from the object 1, the M-CPU 51 calculates a direction to the object 1 as viewed from the object 2 and the distance therebetween on the basis of the coordinate data of the object 1 and the coordinate data of the object 2. Further, a program for producing three-dimensional sound effects from the characteristic views of FIGS. 3 to 6 is executed on the basis of the direction and the distance … the sound volume”].

Shimizu does not specifically teach: and when the display screen receives a sliding operation, the display interface moves laterally. However, in the same field of invention, Neymotin teaches:

and when the display screen receives a sliding operation, the display interface moves laterally [(e.g. see Neymotin paragraphs 0033, 0070 and Figs. 8 and 11) “FIG. 11 illustrates the ‘scrolling’ controls of an AVMT Video. When scrolling with a VR Headset or other controller, the user's field of view shifts linearly between the prepositioned screens 701, 702, 703; their video and soundtrack 801, 802, 803 fades between them 804, 805 … The sound is recorded in stereo, and fades in (rises in volume) starting from the direction that it is shifted in (e.g., when shifting from the screen on the headset's left 703 to the screen in the center 702, the sound rises from the left) and expanding to the other side. Likewise, the program's seamless interface means that the VR Headset's motion (or other device, set to “scrolling” FIG. 11) does not ‘select’ a new screen; instead, the videos are positioned in their assigned directions and the VR Headset 206 shifts between them linearly 1101”].

Therefore, considering the teachings of Shimizu and Neymotin, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add “when the display screen receives a sliding operation, the display interface moves laterally,” as taught by Neymotin, to the teachings of Shimizu, because it provides a seamless interface which creates a more engaging and enjoyable experience for the user (e.g. see Neymotin paragraphs 0070, 0084).
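To make the mapped limitation concrete: below is a minimal sketch of claim 1's distance-vector step, assuming one lateral coordinate per scene unit. The identifiers and values are illustrative only; nothing here comes from the application or the cited references.

    # Illustrative sketch of claim 1's distance-vector step (names and
    # coordinates are hypothetical, not from the application as filed).

    scene_axes = [0.0, 1.0, 2.0]   # longitudinal calibration basis ("starting
                                   # point") of each game scene unit, in
                                   # screen-width units

    def distance_vectors(interface_axis: float) -> list[float]:
        """Signed lateral distance from the display interface to each
        scene unit's starting point."""
        return [axis - interface_axis for axis in scene_axes]

    def on_slide(new_interface_axis: float) -> None:
        # A sliding operation moves the display interface laterally; the
        # control module would then form an audio control instruction from
        # these distances (e.g. fading each scene's audio with |distance|).
        for i, d in enumerate(distance_vectors(new_interface_axis)):
            print(f"scene unit {i}: distance {d:+.2f}")

    on_slide(0.25)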
As for dependent claim 2, Shimizu and Neymotin teach the method as described in claim 1, and Shimizu further teaches:

wherein the step of defining a longitudinal calibration basis of each of the game scene units as a starting point, and calculating a distance vector between the display interface and each of the starting points comprises: defining a central axis of each game scene unit as a longitudinal calibration basis, and defining a central axis of the display interface as a longitudinal reference basis [(e.g. see Shimizu col 4 lines 13-19, col 7 lines 7-15) “The coordinate data fed to the audio processing unit 12 also includes … Y coordinate data representing the longitudinal/vertical direction of the display screen … Such relationships for controlling distance/direction and/or the sound volume may be predetermined, e.g., stored in a look-up table, or embodied as one or more equations stored in real time. In the look-up table embodiment, sound volume values of the waveform are stored in a table for each left and right unit distance centered around the position of the virtual camera (or the hero character) and read out using the current distance as an address to the table”];

calculating a first distance scalar and a second distance scalar between the longitudinal reference basis and two adjacent longitudinal calibration bases respectively [(e.g. see Shimizu col 5 lines 14-18) “the audio processing unit 12 finds the distance between the sound generating object and the virtual camera or the hero character on the basis of the first and second coordinate data, and determines the sound volume on the basis of the distance”];

the step of forming, by a control module of the intelligent terminal, an audio control instruction based on the distance vector, to change audio parameters of each of the game audios comprises: forming, by the control module, an audio control instruction comprising volume control information based on a first ratio, which is the first distance scalar to a distance between the two adjacent longitudinal calibration bases, and a second ratio, which is the second distance scalar to a distance between the two adjacent longitudinal calibration bases, wherein a volume of each game audio is adjusted respectively based on the volume control information [(e.g. see Shimizu col 7 lines 22-40, col 10 lines 52 – col 11 line 3) “the coordinate data storage area 15c stores coordinate data of a sound generating object or the like displayed on a screen. For example, the coordinate data storage area 15c sorts coordinate data of an object 1 generating sounds such as an enemy character or a waterfall as coordinate data of the object 1. The coordinate data storage area 15c stores coordinate data of an object 2 such as a virtual camera (or the hero character) whose line of sight moves to see the object 1 by an operator operating the controllers 30 as coordinate data of the object 2. When sounds are generated from the object 1, the M-CPU 51 calculates a direction to the object 1 as viewed from the object 2 and the distance therebetween on the basis of the coordinate data of the object 1 and the coordinate data of the object 2. Further, a program for producing three-dimensional sound effects from the characteristic views of FIGS. 3 to 6 is executed on the basis of the direction and the distance to generate … the sound volume … the sound volume of the left audio signal is set to a maximum amount and the sound volume of the right audio signal is set to zero when the sound generating object exists on the left side at an angle of 0° as viewed from the virtual camera (or the hero character) (see FIG. 3). As the sound generating object moves to the right drawing a semicircle of radius "r" around the virtual camera (or the hero character) as shown in FIG. 2, the sound volume of the right audio signal is gradually increased and the sound volume of the left audio signal is gradually decreased, as indicated by the characteristic view of FIG. 3. When the sound generating object reaches the front of the virtual camera (or the hero character) at position at an angle of 90° from the left side, the sound volumes of the left and right audio signals are made equal. Further, when the sound generating object moves right to reach a position on the right side of the virtual camera (or the hero character) at an angle of 180° from the left side, the sound volume of the left audio signal is set to zero, and the sound volume of the right audio signal is set to the maximum amount”].

As for dependent claim 3, Shimizu and Neymotin teach the method as described in claim 2, and Shimizu further teaches:

wherein the step that a volume of each game audio is adjusted respectively based on the volume control information comprises: obtaining, by the control module, a current volume of the intelligent terminal, and changing the volume of the game audio based on the following formulas: a first volume control information = (1 - first ratio) * 100% * current volume, and a second volume control information = (1 - second ratio) * 100% * current volume [(e.g. see Shimizu col 2 lines 7-14, col 7 lines 7-10, 47-48 and Fig. 2) “A first digital-to-analog converter converts the first sound source data into an analog audio signal which is fed to a first sound generator, e.g., a left or right speaker. A second digital-to-analog converter converts the second sound source data read out from the temporary storage section into an analog audio signal which is fed to a second sound generator, e.g., the other of the left or right speaker … the sound volumes of the left and right audio signals may be similarly controlled … Such relationships for controlling distance/direction and/or the sound volume may be predetermined, e.g., stored in a look-up table, or embodied as one or more equations stored in real time … the sound volume of the left audio signal is set to a maximum amount and the sound volume of the right audio signal is set to zero when the sound generating object exists on the left side at an angle of 0° as viewed from the virtual camera (or the hero character) (see FIG. 3). As the sound generating object moves to the right drawing a semicircle of radius "r" around the virtual camera (or the hero character) as shown in FIG. 2, the sound volume of the right audio signal is gradually increased and the sound volume of the left audio signal is gradually decreased, as indicated by the characteristic view of FIG. 3. When the sound generating object reaches the front of the virtual camera (or the hero character) at position at an angle of 90° from the left side, the sound volumes of the left and right audio signals are made equal. Further, when the sound generating object moves right to reach a position on the right side of the virtual camera (or the hero character) at an angle of 180° from the left side, the sound volume of the left audio signal is set to zero, and the sound volume of the right audio signal is set to the maximum amount”].

Examiner notes that, as described and depicted, the left/right audio volume mix of an object moving across the display gradually changes from 100/0 at the far left side (L), to 50/50 in the middle (L=R), and to 0/100 at the far right side (R), which is equivalent to the equations presented.

As for dependent claim 4, Shimizu and Neymotin teach the method as described in claim 3, and Shimizu further teaches:

wherein the step of defining a longitudinal calibration basis of each of the game scene units as a starting point, and calculating a distance vector between the display interface and each of the starting points further comprises: calculating a first direction and a second direction of the longitudinal reference basis and the two adjacent longitudinal calibration bases respectively [(e.g. see Shimizu col 4 lines 13-19, col 5 lines 14-27, col 10 line 52 – col 11 line 3 and Fig. 2) “The coordinate data fed to the audio processing unit 12 also includes … Y coordinate data representing the longitudinal/vertical direction of the display screen … the audio processing unit 12 finds the distance between the sound generating object and the virtual camera or the hero character on the basis of the first and second coordinate data, and determines the sound volume on the basis of the distance … determines the change (i.e., an angle) in direction of the sound generating object as viewed from the camera (or the hero character) on the basis of coordinate data respectively representing the position of the camera (or the hero character) and the positions of the sound generating object before and after … the coordinate data storage area 15c stores coordinate data of a sound generating object or the like displayed on a screen. For example, the coordinate data storage area 15c sorts coordinate data of an object 1 generating sounds such as an enemy character or a waterfall as coordinate data of the object 1. The coordinate data storage area 15c stores coordinate data of an object 2 such as a virtual camera (or the hero character) whose line of sight moves to see the object 1 by an operator operating the controllers 30 as coordinate data of the object 2. When sounds are generated from the object 1, the M-CPU 51 calculates a direction to the object 1 as viewed from the object 2 and the distance therebetween on the basis of the coordinate data of the object 1 and the coordinate data of the object 2. Further, a program for producing three-dimensional sound effects from the characteristic views of FIGS. 3 to 6 is executed on the basis of the direction and the distance to generate … the sound volume”];

the step that a volume of each game audio is adjusted respectively based on the volume control information comprises: obtaining, by the control module, a current volume of the intelligent terminal, and changing the volume of each game audio on different channels based on the following formulas: a first volume control information = (1 - first ratio) * 100% * current volume, and a second volume control information = (1 - second ratio) * 100% * current volume [(e.g. see Shimizu col 7 lines 7-10, 47-48 and Fig. 2) “the sound volumes of the left and right audio signals may be similarly controlled … Such relationships for controlling distance/direction and/or the sound volume may be predetermined, e.g., stored in a look-up table, or embodied as one or more equations stored in real time … the sound volume of the left audio signal is set to a maximum amount and the sound volume of the right audio signal is set to zero when the sound generating object exists on the left side at an angle of 0° as viewed from the virtual camera (or the hero character) (see FIG. 3). As the sound generating object moves to the right drawing a semicircle of radius "r" around the virtual camera (or the hero character) as shown in FIG. 2, the sound volume of the right audio signal is gradually increased and the sound volume of the left audio signal is gradually decreased, as indicated by the characteristic view of FIG. 3. When the sound generating object reaches the front of the virtual camera (or the hero character) at position at an angle of 90° from the left side, the sound volumes of the left and right audio signals are made equal. Further, when the sound generating object moves right to reach a position on the right side of the virtual camera (or the hero character) at an angle of 180° from the left side, the sound volume of the left audio signal is set to zero, and the sound volume of the right audio signal is set to the maximum amount”].

Examiner notes that, as described and depicted, the left/right audio volume mix of an object moving across the display gradually changes from 100/0 at the far left side (L), to 50/50 in the middle (L=R), and to 0/100 at the far right side (R), which is equivalent to the equations presented.
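The equivalence the examiner notes is easy to check numerically. Below is a minimal sketch of the claim 3/4 formulas, assuming the two ratios are complementary (they sum to 1 when the display interface sits between the two adjacent calibration bases); the names and values are illustrative, not from the application.

    # Claim 3/4 formulas: volume_i = (1 - ratio_i) * 100% * current volume.
    # Assumes first_ratio + second_ratio = 1, i.e. the display interface
    # lies between the two adjacent longitudinal calibration bases.

    def volume_pair(first_ratio: float, current_volume: float) -> tuple[float, float]:
        second_ratio = 1.0 - first_ratio
        first_volume = (1.0 - first_ratio) * current_volume
        second_volume = (1.0 - second_ratio) * current_volume
        return first_volume, second_volume

    for ratio in (0.0, 0.5, 1.0):        # far side, midpoint, other side
        left, right = volume_pair(ratio, 100.0)
        print(f"first ratio {ratio:.1f}: L={left:.0f}% R={right:.0f}%")
    # Prints 100/0, 50/50, 0/100 -- the same progression Shimizu's
    # left/right pan curve traces in FIG. 3.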
As for dependent claim 6, Shimizu and Neymotin teach the method as described in claim 1, and Shimizu further teaches:

wherein the method further comprises the following steps: obtaining game objects within any game scene unit, and an operation instruction group for operating the game objects, wherein the operation instruction group comprises at least one operation instruction; and selecting any operation instruction in the operation instruction group, and applying the operation instruction to the game object [(e.g. see Shimizu col 3 lines 47-57, col 10 lines 56-61) “the visual viewpoint/line of sight of the virtual camera is moved by the progress of the game, the operation of a player, and the like. The images such as the first display object that the viewer sees on the display screen are in effect viewed through a virtual camera from a particular point of reference or perspective in the three-dimensional scene. When a hero character or other object (for example, a human being or an animal) moves, (e.g., movement of the hands and legs), the line of sight of the virtual camera may, in some cases, be moved in synchronization with the movement of the line of sight of the hero character … The coordinate data storage area 15c stores coordinate data of an object 2 such as a virtual camera (or the hero character) whose line of sight moves to see the object 1 by an operator operating the controllers 30 as coordinate data of the object 2”].

As for independent claim 8, Shimizu and Neymotin teach a system. Claim 8 discloses substantially the same limitations as claim 1; therefore, it is rejected with the same rationale as claim 1.

As for dependent claim 10, Shimizu and Neymotin teach a non-transitory computer-readable storage medium implementing the steps of claim 1. Claim 10 discloses substantially the same limitations as claim 1; therefore, it is rejected with the same rationale as claim 1.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Shimizu (US 5,754,660) in view of Neymotin et al. (US 2016/0071546 A1), as applied to claim 1 above, and further in view of Takiguchi et al. (US 2016/0062629 A1).

As for dependent claim 5, Shimizu and Neymotin teach the method as described in claim 1, and Shimizu further teaches:

wherein the step of changing audio parameters of each of the game audios comprises: changing one or more of a volume, a frequency band, a phase or a reverberation of each of the game audios [(e.g. see Shimizu col 7 lines 47-48) “the sound volumes of the left and right audio signals may be similarly controlled”].

Shimizu and Neymotin do not specifically teach: the interaction method further comprises the following steps: setting a sliding threshold and an audio adjustment rate threshold within the game application program, and when a speed at which the display interface moves laterally is greater than the sliding threshold, changing, by the control module, audio parameters of each of the game audios based on the audio adjustment rate threshold.

However, in the same field of invention, Takiguchi teaches:

the interaction method further comprises the following steps: setting a sliding threshold and an audio adjustment rate threshold within the game application program, and when a speed at which the display interface moves laterally is greater than the sliding threshold, changing, by the control module, audio parameters of each of the game audios based on the audio adjustment rate threshold [(e.g. see Takiguchi paragraph 0092) “The sound output processing unit 23 of the game machine 1 outputs the sound effect through the speaker 15 when the home screen is scrolled to the right or left. In the present example, the sound output processing unit 23 outputs footstep sound of the character 121 as the sound effect. As described above, the speed of the animation of the character 121 varies corresponding to the scroll speed of the home screen. Thus, the sound output processing unit 23 adjusts the reproduction speed of the sound effect in accordance with the scroll speed of the home screen. Further, the sound output processing unit 23 may change the sound volume of the sound effect in accordance with the scrolling of the home screen. For example, the sound output processing unit 23 may gradually increase the sound volume of the sound effect. Then, the sound effect is outputted at a higher sound volume when the scrolling of the home screen is continued for a longer time”].

Therefore, considering the teachings of Shimizu, Neymotin and Takiguchi, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add the steps of setting a sliding threshold and an audio adjustment rate threshold within the game application program and, when a speed at which the display interface moves laterally is greater than the sliding threshold, changing, by the control module, audio parameters of each of the game audios based on the audio adjustment rate threshold, as taught by Takiguchi, to the teachings of Shimizu and Neymotin, because it improves the aesthetic appearance and design variety of the displayed screen (e.g. see Takiguchi paragraph 0145).
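For context, here is one plausible reading of claim 5's two thresholds; the claim language leaves the exact behavior open, and the constants and names below are illustrative, not from the application: sliding faster than the sliding threshold caps how quickly the audio parameters may change.

    # One plausible reading of claim 5 (hypothetical constants and names):
    # when the interface slides faster than the sliding threshold, clamp
    # each audio-parameter update to the audio adjustment rate threshold.

    SLIDING_THRESHOLD = 2.0       # screen-widths per second (illustrative)
    AUDIO_RATE_THRESHOLD = 0.1    # max volume change per update tick

    def step_volume(current: float, target: float, slide_speed: float) -> float:
        delta = target - current
        if slide_speed > SLIDING_THRESHOLD:
            # Rate-limit the adjustment instead of jumping to the target.
            delta = max(-AUDIO_RATE_THRESHOLD, min(AUDIO_RATE_THRESHOLD, delta))
        return current + delta

    print(step_volume(1.0, 0.0, slide_speed=3.5))   # -> 0.9 (rate-limited)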
Claims 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Shimizu (US 5,754,660) in view of Neymotin et al. (US 2016/0071546 A1), as applied to claim 6 above, and further in view of Mawdesley et al. (US 2022/0266133 A1).

As for dependent claim 7, Shimizu and Neymotin teach the method as described in claim 6, but do not specifically teach: wherein the step of selecting any operation instruction in the operation instruction group and applying the operation instruction to the game object comprises: forming a line segment having a total length of 1 and each operation instruction interval of in based on remaining health points of the game objects and weights of the operation instructions, randomly selecting points on the line segment, and selecting a falling interval as a determined operation instruction interval, or based on weight ranking of the game objects, applying an operation instruction corresponding to the determined operation instruction interval to a game object ranked first in the weight ranking.

However, in the same field of invention or solving similar problems, Mawdesley teaches:

wherein the step of selecting any operation instruction in the operation instruction group and applying the operation instruction to the game object comprises: forming a line segment having a total length of 1 and each operation instruction interval of in based on remaining health points of the game objects and weights of the operation instructions [(e.g. see Mawdesley paragraphs 0113, 0128, 0132 and Fig. 17) “Also shown is a health bar 1718 … health goes up in case of a win in a spin, and goes down when a player gets attacked; attack points are earned by killing other players. Any appropriate algorithm can be used, if desired, to ensure that health is lost at a slower rate typically at the beginning of the game, and speeds up towards the end … Defence The safest mode when 50% defence, 50% attack emphasis is on protection, (typically between 25-60% less on attack defence, 75-40% attack) Balance The balanced mode 25% defence, 75% attack (default) (typically between 5-45% defence, 95-55% attack) Attack The riskiest mode when 0% defence, 100% attack no points from a spin win (that is, a win does not are attributed to health, boost health in this mode) and everything goes into an attack”];

randomly selecting points on the line segment, and selecting a falling interval as a determined operation instruction interval [(e.g. see Mawdesley paragraphs 0020, 0129, 0133) “There are various defence and attack modes the player can choose from in order to gain some control over the game. Adjusting the modes is optional, and without changing them, the player can still win the game with default settings. Some parts of the game, such as the random defence mode or random attack mode, may use random number generators (RNGs) to satisfy various compliance requirements and to ensure the level of randomness in the game … attack mode … Random … All available attack power is aimed at contenders. The target is used randomly … an action carried out by one player in respect of another player (or several other players) may diminish the first player's mana by a fixed amount, and reduce the other player's health by an amount dependent on the power, for example”];

and based on weight ranking of the game objects, applying an operation instruction corresponding to the determined operation instruction interval to a game object ranked first in the weight ranking [(e.g. see Mawdesley paragraphs 0035, 0148) “Said at least one other player is preferably selected in accordance with a predefined player selection rule, and preferably the player selection rule is selected by the player, preferably from at least one of: other players who have attacked the player, players with a most favourable attribute value, players with a least favourable attribute value, and randomized … the targeted players are attacked, and the attack points are deducted from the players' health values. If any players' health drops to zero or below, the player is ‘dead’, and can no longer take part in the game”].

Therefore, considering the teachings of Shimizu, Neymotin and Mawdesley, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add the step of forming a line segment having a total length of 1 and each operation instruction interval of in based on remaining health points of the game objects and weights of the operation instructions, randomly selecting points on the line segment, and selecting a falling interval as a determined operation instruction interval, and, based on weight ranking of the game objects, applying an operation instruction corresponding to the determined operation instruction interval to a game object ranked first in the weight ranking, as taught by Mawdesley, to the teachings of Shimizu and Neymotin, because it creates greater efficiency in the system by providing a seamless multi-player experience (e.g. see Mawdesley paragraph 0110).

As for dependent claim 9, Shimizu and Neymotin teach the system as described in claim 8; further, claim 9 discloses substantially the same limitations as claims 6 and 7. Therefore, it is rejected with the same rationale as claims 6 and 7.
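Mechanically, the line-segment selection recited in claims 7 and 9 reads as weighted random sampling over a unit interval. Below is a minimal sketch under that reading; the weights and identifiers are illustrative, not from the application or the cited references.

    import random

    # Claims 7/9 read as weighted sampling on a length-1 segment: each
    # operation instruction owns an interval proportional to its weight;
    # a uniformly random point "falls" into exactly one interval.

    def select_instruction(weights: dict[str, float]) -> str:
        total = sum(weights.values())
        point = random.random()            # random point on the unit segment
        right_edge = 0.0
        for instruction, weight in weights.items():
            right_edge += weight / total   # boundary of this instruction's interval
            if point < right_edge:
                return instruction         # the "falling interval"
        return instruction                 # guard against float rounding

    # Illustrative weights; per the claim, they could be derived from the
    # remaining health points of the game objects.
    print(select_instruction({"attack": 0.6, "defend": 0.3, "heal": 0.1}))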
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

U.S. PGPub 2016/0317929 A1 to Wu et al., published 03 November 2016. The subject matter disclosed therein is pertinent to that of claims 1-10 (e.g. audio parameter effect when transitioning between scenes).

U.S. PGPub 2018/0329461 A1 to Hernandez Santisteban et al., published 15 November 2018. The subject matter disclosed therein is pertinent to that of claims 1-10 (e.g. adjusting the audio balance between two displayed screens).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER J FIBBI, whose telephone number is (571) 270-3358. The examiner can normally be reached Monday - Thursday, 8am-6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER J FIBBI/
Primary Examiner, Art Unit 2174

Prosecution Timeline

Mar 11, 2024: Application Filed
Dec 17, 2025: Non-Final Rejection (§103, §112)
Apr 06, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585866: AUTOMATED ENTRY OF EXTRACTED DATA AND VERIFICATION OF ACCURACY OF ENTERED DATA THROUGH A GRAPHICAL USER INTERFACE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12561152: METHODS AND SYSTEMS FOR ADAPTIVE CONFIGURATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12535930: INTEROPERABILITY FOR TRANSLATING AND TRAVERSING 3D EXPERIENCES IN AN ACCESSIBILITY ENVIRONMENT (granted Jan 27, 2026; 2y 5m to grant)
Patent 12535941: USER INTERFACE FOR MANAGING INPUT TECHNIQUES (granted Jan 27, 2026; 2y 5m to grant)
Patent 12519999: Location Based Playback System Control (granted Jan 06, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 53%
With Interview: 90% (+37.6%)
Median Time to Grant: 4y 3m
PTA Risk: Low
Based on 376 resolved cases by this examiner. Grant probability derived from career allow rate.
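The headline figures above appear to reduce to simple arithmetic on the examiner's career data. A minimal sketch, assuming (the page does not confirm this) that the interview lift is additive in percentage points:

    # Reconstructing the projection figures (assumption: additive lift).
    granted, resolved = 199, 376            # examiner career totals shown above
    allow_rate = granted / resolved         # 0.529... -> displayed as 53%
    interview_lift = 0.376                  # +37.6% shown above

    with_interview = allow_rate + interview_lift
    print(f"{allow_rate:.1%} baseline, {with_interview:.1%} with interview")
    # -> 52.9% baseline, 90.5% with interview (displayed as 53% / 90%)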
