Prosecution Insights
Last updated: April 19, 2026
Application No. 18/786,642

STORAGE MEDIUM, METHOD, AND INFORMATION PROCESSING APPARATUS

Non-Final OA — §101, §103
Filed: Jul 29, 2024
Examiner: THERKORN, ERICA GERALDINE
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Colopl Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 7 (across all art units; 7 currently pending)

Statute-Specific Performance

§101: 23.8% (-16.2% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§112: 23.8% (-16.2% vs TC avg)
Tech Center averages are estimates; based on career data from 0 resolved cases.

Office Action

Grounds: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 are rejected under 35 U.S.C. 101 because claims 1-7 recite a storage medium. The broadest reasonable interpretation of a claim drawn to a storage medium (also called computer readable medium, machine readable medium, and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter.

The USPTO recognizes that applicants may have claims directed to computer readable media that cover signals per se, which the USPTO must reject under 35 U.S.C. 101 as covering both non-statutory subject matter and statutory subject matter. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. 101 by adding the limitation "non-transitory" to the claim. Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se.
As an additional note, a non-transitory computer readable medium having executable programming instructions stored thereon is considered statutory, as non-transitory computer readable media excludes transitory data signals.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5, 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Dyke et al. (JP 2005111007 A; hereinafter Dyke) in view of Kim et al. (KR 20000050246 A; hereinafter Kim).

Regarding claim 1, Dyke teaches a storage medium having stored thereon a program which, when executed by a computer including a processor and a memory, causes the processor to perform operations ("The game apparatus 3 selectively uses an optical disk 4 that is an example of an information storage medium by attaching and detaching it," (page 5, para 1). "In FIG. 2, the game apparatus 3 includes, for example, a risk (RISC) CPU (Central Processing Unit) 31 and executes various programs. The CPU 31 executes a startup program stored in a boot ROM (not shown), and initializes a memory such as the work memory 32. Thereafter, the CPU 31 executes a game program stored on the optical disc 4 and performs a game process corresponding to the game program.
A work memory 32, a video RAM (VRAM) 33, an external memory interface (I / F) 34, a controller I / F 35, a GPU (Graphics Processing Unit) 36, and an optical disk drive 37 are connected to the CPU 31 via a predetermined bus," (page 5, para 4). The disclosed optical disc reads on the storage medium.) comprising: generating a virtual space ("The game space construction program GAP defines a process for constructing a game space by arranging each object in accordance with an operation signal input from the controller 6 and a game situation progressed by each program (S3, S4)," (page 6, para 3). Constructing a game space reads on generating a virtual space.); placing a virtual camera and a user object in the virtual space ("...Next, the CPU 31 moves the coordinates of the center position and viewpoint (virtual camera) of the player object in the world coordinate system in accordance with the currently set movement amount in one frame unit and its direction (step S33), and performs the processing. Proceed to the next step. Here, the viewpoint for generating the game image is set so that the player character is always included in the field of view. For example, the positional relationship between the viewpoints is fixedly arranged behind the player character so that the player character is arranged near the center of the game screen. Next, the CPU 31 arranges all the objects including the player object in the world coordinate system representing the game space (step S34), and advances the processing to the next step. In step S34, the respective local coordinates are converted into world coordinates based on the center position coordinates of the respective objects calculated with respect to the world coordinate system, whereby the respective objects are arranged in the world coordinate system," (page 8, para 1-4; Figures 8-13). The disclosed viewpoint reads on virtual camera. The player object reads on user object. 
The world coordinate system represents the game space and thus reads on virtual space.); generating, based on the virtual camera, a virtual space image obtained by capturing an inside of the virtual space from the virtual camera ("Next, the CPU 31 converts the game space shown in the world coordinate system in which all objects are arranged into the camera coordinates (view coordinates) of the viewpoint center and performs perspective projection conversion (step S35), and according to the subroutine. The process ends. For example, as shown in FIG. 8, the game image GI generated through the processes of steps S31 to S35 includes a player character P, an enemy character E, and a building object B. Since the virtual camera is arranged behind the player object and the player object is located near the center of the game screen, the back surface of the player character P is represented in the game image GI. Since the player character P is moving in the game space at a speed exceeding the predetermined value, the game image GI is expressed as moving at a speed exceeding the predetermined value in the X direction shown in the figure. As shown in FIG. 8, since the positional relationship between the player character P, the enemy character E, and the building object B is clear on the game image GI, the player can easily perform an operation of moving the player character P," (page 8, para 5; Figures 8-13). The game image reads on a virtual space image. Converting the game space into the camera coordinates of the viewpoint center and performing perspective projection conversion reads on generating based on the virtual camera. 
The game space (reads on virtual space) is captured in the game image since there is a clear correlation between the game space and the generated game image.); and moving, based on a movement operation for moving the user object being performed, the user object in the virtual space, wherein, in the generating the virtual space image, when the movement operation is not performed the user object is set as not represented and when the movement operation is performed the user object is set as opaque ("However, since the player object has been deleted in step S44, the player character P is not represented. At this time, the player character P exists in the game space in a state of moving or stopped at a low speed not exceeding the predetermined value. Note that a tool such as a muzzle for shooting may be displayed on the game image GI as part of the player character P," (page 9, para 3; Fig. 9). When the player character is in a stopped state, there is no movement operation being performed. When the player object (reads on user object) is deleted it is effectively transparent since it is not visually represented, as is clear in Fig. 9. "For example, as shown in FIG. 8... Since the player character P is moving in the game space at a speed exceeding the predetermined value, the game image GI is expressed as moving at a speed exceeding the predetermined value in the X direction shown in the figure. As shown in FIG. 8, since the positional relationship between the player character P, the enemy character E, and the building object B is clear on the game image GI, the player can easily perform an operation of moving the player character P," (page 8, para 5; Fig. 8). The player character corresponds to the player object (reads on user object). In Figure 8, it is clear that the player character is opaque. It is disclosed that Figure 8 represents a scenario where the player character is moving. 
The player can perform an operation of moving the player character, which reads on a movement operation for moving the user object.). Dyke fails to explicitly teach achieving non-representation by setting an object as transparent. Kim teaches achieving non-representation by setting an object as transparent (“Then, the color value is given only to the consumable character to be displayed on the current screen, and the remaining consumable character is made transparent by removing the color value. Therefore, only consumable characters with color values appear in the eyes of the user. If you want to change this, adjust the transparency to determine whether the color value exists. If the user turns on / off the button corresponding to the consumable character to be displayed, the transparency of this consumable character is set to 0%. It is displayed on the screen, and the consumable character that was displayed before is changed to 100% transparency so that it does not appear on the screen,” (page 4, para 6-8)). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Kim to Dyke. The motivation would have been to improve visibility and reduce occlusion.

Regarding claims 8 and 9, they are rejected using the same citations and rationales described in the rejection of claim 1.

Regarding claim 5, Dyke in view of Kim teaches the storage medium according to Claim 1, wherein a distance between the virtual camera and the user object is kept constant before and after a start of the movement operation and before and after an end of the movement operation (Dyke; "Next, the CPU 31 moves the coordinates of the center position and viewpoint (virtual camera) of the player object in the world coordinate system in accordance with the currently set movement amount in one frame unit and its direction (step S33), and performs the processing. Proceed to the next step. 
Here, the viewpoint for generating the game image is set so that the player character is always included in the field of view. For example, the positional relationship between the viewpoints is fixedly arranged behind the player character so that the player character is arranged near the center of the game screen," (page 8, para 3). The fixed positional relationship reads on a distance between the virtual camera and the user object is kept constant. "Further, as apparent from a comparison between FIG. 8 and FIG. 9, each game image GI has the same base point from which the player character fires a shooting bullet. This is because the difference between the two is only whether or not the player character P is displayed, and the position coordinates of the player object and the viewpoint set in each game space are common. Therefore, even when the player switches from the game image GI shown in FIG. 8 to the game image GI shown in FIG. 9 (that is, the movement of the player character P is decelerated below the prescribed value in the state shown in FIG. 8), It can be performed. Further, the switching of the game image GI described above does not change the viewpoint with respect to objects other than the player character P. Therefore, it is possible to give the player a feeling as if the player character P suddenly disappeared." (Dyke; page 9, para 5; Fig. 8; Fig. 9). The camera distance remains constant even when switching display modes. The switching between display modes is based on the movement state. At least one embodiment of this reference teaches that the distance between the player character and the viewpoint is fixed, and remains constant across display modes. In that embodiment, the distance between the player character and the viewpoint remains constant before and after a start of the movement operation and before and after an end of the movement operation.).

Claim 2 is rejected under 35 U.S.C. 
103 as being unpatentable over Dyke in view of Kim in further view of Ando et al. (WO 2005030355 A1; hereinafter Ando).

Regarding claim 2, Dyke in view of Kim teaches the storage medium according to Claim 1, wherein, in the generating the virtual space image, when changing the user object from opaque to transparent, a level of transparency of the user object is gradually raised (Dyke; “Further, when the player character image is represented by an object whose transmission amount gradually increases as the moving speed decreases, it is possible to give the player a feeling that the player character gradually disappears,” (page 4, para 2). "Further, as the speed at which the player character P moves through the game space, an image showing the player character P may be displayed on the game image GI in a translucent state in which the transmission amount gradually increases. Also in this case, the game image GI is displayed with the player character P finally deleted as shown in FIG," (Dyke; page 10, para 2). The transmission amount gradually increasing reads on gradually raising the transparency.), and when changing the user object from transparent to opaque, the level of transparency of the user object is gradually lowered (Dyke; "For example, as shown in FIG. 11, the player character Pt is displayed on the game image GI with a semi-transparent object, so that the front of the player character Pt can be easily viewed and the game space can be moved at high speed. it can. At this time, according to the acceleration of the speed at which the player character P moves in the game space, the image indicating the player character P may be changed so as to be displayed darker (so that the transmission amount decreases)," (page 10, para 2). The transmission amount decreases reads on lowering the transparency.). Dyke in view of Kim does not explicitly teach that the level of transparency is changed gradually when making the character more opaque. 
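As a non-authoritative illustration of the behavior the examiner maps above — opaque while a movement operation is active, faded out to "not represented" when it is not, with the transparency level changed gradually in both directions — the mechanism can be sketched as follows. All names, the per-frame fade rate, and the representation threshold are hypothetical, not taken from Dyke, Kim, or the claims.

```python
FADE_RATE = 0.1  # transparency change per frame (assumed value)

class UserObject:
    """Minimal sketch of movement-dependent transparency."""

    def __init__(self):
        self.transparency = 0.0  # 0.0 = fully opaque, 1.0 = fully transparent

    def update(self, movement_operation_active: bool) -> None:
        if movement_operation_active:
            # Moving: fade gradually back toward opaque.
            self.transparency = max(0.0, self.transparency - FADE_RATE)
        else:
            # Stopped: fade gradually toward fully transparent.
            self.transparency = min(1.0, self.transparency + FADE_RATE)

    @property
    def represented(self) -> bool:
        # Fully transparent is treated as "not represented" in the image.
        return self.transparency < 1.0

obj = UserObject()
for _ in range(20):   # no movement operation: object fades out completely
    obj.update(False)
assert obj.transparency == 1.0 and not obj.represented
for _ in range(20):   # movement operation performed: object fades back in
    obj.update(True)
assert obj.transparency == 0.0 and obj.represented
```

The `max`/`min` clamps keep the level within [0, 1], so repeated updates in one state settle at fully opaque or fully transparent rather than overshooting.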
However, Ando teaches that the level of transparency is changed gradually when making the character more opaque (Ando; “In step 262, the transparency of the next runner character is set according to the change time information, and the process proceeds to step 264. In other words, immediately after the replacement time is received, the transparency of the next runner character is almost 100%, and as the current runner character approaches the reference point, the transparency gradually decreases, and the replacement time (when the current runner character reaches the reference point) In this case, the transparency is set to 0% (see Fig. 7). In step 264, it is determined whether or not the reference point has been reached. If a negative determination is made, the process returns to step 258. If the determination in step 264 is affirmative, this routine is executed. However, at the start of the next processing of the routine, an affirmative determination is made at step 230. In this second embodiment, as the current runner character approaches the reference point, the transparency is gradually increased. The so-called morphing process in which the current runner character gradually transforms into the next runner character is increased, although the next runner character's transparency is gradually lowered (gradually darker),” (page 10, para 2-3)). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Ando to Dyke in view of Kim. The motivation would have been to improve the smoothness of transitions and to improve the user experience.

Claims 3 and 4 are rejected under 35 U.S.C. 
103 as being unpatentable over Dyke in view of Kim in further view of Ronan Boulic et al., "Integration of motion control techniques for virtual human and avatar real-time animation", 01 September 1997, Association for Computing Machinery, Proceedings of the ACM Symposium on Virtual Reality Software and Technology, 111-118; hereinafter Boulic. Regarding claim 3, Dyke in view of Kim does not teach but Boulic teaches the storage medium according to Claim 1, wherein the program further causes the processor to perform operations comprising: causing the user object to perform a collateral action accompanying a movement or a stop of the user object during at least one of a first period after a start of the movement operation or a second period after an end of the movement operation, wherein the collateral action includes at least one of a preparation action while the user object transitions from a stopped state to a moving state or a lingering action while the user object transitions from the moving state to the stopped state (Boulic; "In the present paper we use the AGENT term to represent both a Virtual Human and an Avatar. We call ‘Avatar’ a Virtual Human whose posture is controlled by the EU with a motion capture device [MBT96]," (page 112, section 3 The Agent Entity, para 2). EU stands for End user. Agent reads on user object. "The ease-in and ease-out technique, based on cubic step functions, is applied to smoothen the motion transition. These two transitions are managed while the action is in the active status. The ease-in transition is also called the initiating phase while the ease-out transition is also called the terminating phase (Fig. 5). An action is responsible for reporting when it is “completed”, i.e. when it reaches its final state and remains in the corresponding final posture. 
This information can be important for the AGENT to compute the transitions depending on the action transition constraints explained below," (Boulic; pages 112-113, section 3.2 Action Activation, para 4-5; page 113, Figure 5 and Figure 6). From Figure 5, it is clear that the initiating phase occurs after the onset of the action activity; this reads on a first period after a start of the movement operation. From Figure 6, it is clear that the initiating phase includes “posture blending on the initial posture.” The transition that occurs during the initiating phase reads on a collateral action. "Besides, in a sitting action it is important to begin from a prescribed standing posture for balance purpose; so the sitting-down action initiating type specifies a blending with an initial posture instead of the initial motion (Figure 6b, left side)," (Boulic; page 113, section 3.3 Action Transition Constraints, para 1-3). From Figures 5 and 6, the initiating phase occurs before executing an action such as the disclosed sitting action performed by an agent; this reads on accompanying a movement of the user object. An action such as the sitting down action reads on a movement operation. Standing reads on a stopped state, and sitting down reads on a moving state, and blending with an initial posture reads on a preparation action.). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Boulic to Dyke in view of Kim. The motivation would have been to improve the smoothness of transitions and to improve the user experience.

Regarding claim 4, Dyke in view of Kim does not teach but Boulic teaches the storage medium according to Claim 3, wherein the collateral action includes a change in a posture of the user object (Boulic; "The ease-in and ease-out technique, based on cubic step functions, is applied to smoothen the motion transition. 
These two transitions are managed while the action is in the active status. The ease-in transition is also called the initiating phase while the ease-out transition is also called the terminating phase (Fig. 5). An action is responsible for reporting when it is “completed”, i.e. when it reaches its final state and remains in the corresponding final posture. This information can be important for the AGENT to compute the transitions depending on the action transition constraints explained below," (pages 112-113, section 3.2 Action Activation, para 4-5; page 113, Figure 5 and Figure 6). "Besides, in a sitting action it is important to begin from a prescribed standing posture for balance purpose; so the sitting-down action initiating type specifies a blending with an initial posture instead of the initial motion (Figure 6b, left side)," (Boulic; page 113, section 3.3 Action Transition Constraints, para 1-3). The transition that occurs during the initiating phase reads on a collateral action. The disclosed posture blending is included in the transition that occurs during the initiating phase, which reads on a change in posture of the user object.). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Boulic to Dyke in view of Kim. The motivation would have been to improve the smoothness of transitions and to improve the user experience.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Dyke in view of Kim in further view of Nick Pruehs, Six ingredients for a dynamic third person camera, 27 April 2018, pages 1-17; hereinafter Pruehs. 
Regarding claim 6, Dyke in view of Kim fails to teach but Pruehs teaches the storage medium according to Claim 1, wherein when the user object transitions from a stopped state to a moving state, the user object is controlled so that the user object is moved away from the virtual camera, and when the user object transitions from the moving state to the stopped state, the user object is controlled so that the user object is moved closer to the virtual camera (Pruehs; “We could just use camera modifiers for changing the distance between the character and the camera. However, in that case, we would lose the advantage of the engine built-in USpringArmComponent, which ensures a specific distance from camera to player character unless the line of sight between both is occluded by an obstacle. In that case, the spring arm will automatically pull the camera closer to the character in order to keep line of sight. The default camera distance can be changed through the Socket Offset of the USpringArmComponent," (pages 4-5). "The spring arm component in turn also provides an option for enabling camera lag. You can enable this feature by enabling Camera Lag and Camera Rotation Lag and specifying the respective speeds in order to create what is sometimes called a rubber band camera: Each time the character starts moving in any direction after having stood still for a short while, the camera won’t follow immediately but wait a short moment to provide a less abrupt game experience. Consequently, it will also take a moment to stop moving after the player character has stopped moving," (Pruehs; page 17). Pruehs teaches following the character with the camera. The character starts moving in any direction after having stood still for a short while reads on the user object transitions from a stopped state to a moving state. The camera waits a short moment before following the character, which creates more distance between the camera and character. 
Motion is relative, and thus creating more distance also moves the character away from the camera. In order to create this distance both the camera and the character must be controlled. The camera takes a moment to stop moving after the player character has stopped moving, which reads on the user object transitions from the moving state to the stopped state. The camera waits a moment before stopping after the player character has stopped moving, which reduces the distance between the player character and the camera. Motion is relative, and thus reducing the distance also moves the user object closer to the virtual camera. In order to reduce this distance both the camera and the character must be controlled.). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Pruehs to Dyke in view of Kim. The motivation would have been to improve a user’s visual experience and reduce visual discomfort.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Dyke in view of Kim in further view of Pruehs in further view of Eguchi et al. (JP 2019083964 A; hereinafter Eguchi).

Regarding claim 7, Dyke in view of Kim fails to teach but Pruehs teaches the storage medium according to Claim 1, wherein when the user object transitions from a moving state to a stopped state, the user object is controlled so that the virtual camera follows the user object and stops moving relative to the user object (Pruehs; “We could just use camera modifiers for changing the distance between the character and the camera. However, in that case, we would lose the advantage of the engine built-in USpringArmComponent, which ensures a specific distance from camera to player character unless the line of sight between both is occluded by an obstacle. In that case, the spring arm will automatically pull the camera closer to the character in order to keep line of sight. 
The default camera distance can be changed through the Socket Offset of the USpringArmComponent," (pages 4-5). "The spring arm component in turn also provides an option for enabling camera lag. You can enable this feature by enabling Camera Lag and Camera Rotation Lag and specifying the respective speeds in order to create what is sometimes called a rubber band camera: Each time the character starts moving in any direction after having stood still for a short while, the camera won’t follow immediately but wait a short moment to provide a less abrupt game experience. Consequently, it will also take a moment to stop moving after the player character has stopped moving," (Pruehs; page 17). Pruehs teaches following the character with the camera. The camera takes a moment to stop moving after the player character has stopped moving, which reads on the user object transitions from the moving state to the stopped state. The camera waits a moment before stopping after the player character has stopped moving, which reduces the distance between the player character and the camera. Motion is relative, and thus reducing the distance also moves the user object closer to the virtual camera. In order to reduce this distance both the camera and the character must be controlled.). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Pruehs to Dyke in view of Kim. The motivation would have been to improve a user’s spatial awareness and navigation.

Dyke in view of Kim in further view of Pruehs does not explicitly disclose an orientation of the user object and a viewing direction of the virtual camera are aligned when the virtual camera stops moving relative to the user object. 
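The "rubber band" lag described in the Pruehs citations above — the camera easing toward a fixed offset behind the character, so the gap widens while the character moves and relaxes back to the fixed distance once it stops — can be sketched in one dimension as follows. This is a minimal illustration of the exponential-lag idea only; the names and constants are hypothetical and are not the Unreal Engine spring-arm API.

```python
CAMERA_OFFSET = 5.0  # desired distance behind the character (assumed)
LAG_SPEED = 0.2      # fraction of the remaining gap closed per frame (assumed)

def step_camera(camera_x: float, character_x: float) -> float:
    """Ease the camera toward a point CAMERA_OFFSET behind the character."""
    target = character_x - CAMERA_OFFSET
    return camera_x + LAG_SPEED * (target - camera_x)

character_x, camera_x = 0.0, -5.0   # at rest, camera sits at the offset
for _ in range(30):                 # character moves; the lagging camera
    character_x += 1.0              # falls farther than the offset behind
    camera_x = step_camera(camera_x, character_x)
moving_gap = character_x - camera_x

for _ in range(60):                 # character stops; the camera catches up
    camera_x = step_camera(camera_x, character_x)
stopped_gap = character_x - camera_x

assert moving_gap > CAMERA_OFFSET            # wider gap while moving
assert abs(stopped_gap - CAMERA_OFFSET) < 0.01  # back to the fixed distance
```

With a constant character speed the gap settles at the offset plus a steady lag term, and after the character stops the excess decays geometrically toward the fixed offset — which is the behavior the examiner maps to claims 6 and 7.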
Eguchi teaches an orientation of the user object and a viewing direction of the virtual camera are aligned when the virtual camera stops moving relative to the user object (Eguchi; "Further, in the above embodiment, before setting of the reference point R (before touch operation), when the direction of the player character P1 does not match the shooting direction of the virtual camera, the game screen display means 31 captures the virtual camera An example of moving the virtual camera so as to align the direction with the direction of the player character P1 is illustrated. Instead of this, the operation target control means 34 starts the movement (forward movement) of the player character P1 after rotating the player character P1 so that the direction of the player character P1 is aligned with the shooting direction of the virtual camera," (page 10, para 2). The disclosed direction of the player character reads on an orientation of the user object. The shooting direction of the virtual camera reads on the viewing direction of the virtual camera.). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Eguchi to Dyke in view of Kim in further view of Pruehs. The motivation would have been to improve user orientation and control.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERICA G THERKORN whose telephone number is (571)272-2939. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERICA G THERKORN/
Examiner, Art Unit 2618

/DEVONA E FAULK/
Supervisory Patent Examiner, Art Unit 2618

Prosecution Timeline

Jul 29, 2024
Application Filed
Mar 03, 2026
Non-Final Rejection — §101, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
