Prosecution Insights
Last updated: April 19, 2026
Application No. 18/715,500

ANIMATION EFFECT GENERATION METHOD AND APPARATUS, AND MEDIUM AND DEVICE

Non-Final OA (§101, §103)
Filed: May 31, 2024
Examiner: FOSTER, THOMAS JOHN
Art Unit: 2616
Tech Center: 2600 (Communications)
Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 95% (Favorable)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 95% (19 granted / 20 resolved; +33.0% vs TC avg; above average)
Interview Lift: +7.1% (moderate lift, based on resolved cases with interview)
Avg Prosecution: 2y 5m typical timeline; 17 applications currently pending
Total Applications: 37 across all art units

Statute-Specific Performance

§101: 0.8% (-39.2% vs TC avg)
§103: 72.7% (+32.7% vs TC avg)
§102: 22.7% (-17.3% vs TC avg)
§112: 2.3% (-37.7% vs TC avg)
Tech Center averages are estimates; figures based on career data from 20 resolved cases.

Office Action

Rejections under §101 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: “GUI Element Animation Effect Generation Method and Apparatus, and Medium and Device”.

Allowable Subject Matter

Claim 4 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

35 U.S.C. 112(f) Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: the “determination module” and “generation module” in claim 16. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Regarding claim 17, the claimed invention is not directed to one of the four subject matter categories, i.e., process, machine, manufacture, and composition of matter; a computer readable storage medium may be a carrier wave, a signal per se, and thus non-statutory (MPEP 2106, subsection I, Patent Subject Matter Eligibility). Claim 17 recites “a computer readable storage medium,” which may encompass transitory media such as carrier waves.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2 and 5-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lim (Pub. No. US 20120266109 A1) in view of Roard (Pub. No. US 20190346985 A1).

As per claim 1, Lim teaches the claimed:

1. An animation effect generation method, comprising: in response to determining that an inertial movement of a target element (Lim [0025]: “In some embodiments, movements in a UI are based at least in part on user input (e.g., gestures on a touchscreen) and an inertia model.
For example, a movement can be extended beyond the actual size of a gesture on a touchscreen by applying inertia to the movement. Applying inertia to a movement typically involves performing one more calculations using gesture information (e.g., a gesture start position, a gesture end position, gesture velocity and/or other information) and one or more inertia motion values (e.g., friction coefficients) to simulate inertia motion. Simulated inertia motion can be used in combination with other effects (e.g., boundary effects) to provide feedback to a user.” The UI elements are described in relation to a viewport. Lim [0036]: “Although the example shown in FIG. 1 is described in terms of movement of a UI element relative to a viewport, any of the examples described herein also can be modeled in terms of movement of viewports relative to UI elements, or in some other way, depending on the desired frame of reference.”). after an end event of a two-dimensional touch movement involves a region outside a first elastic boundary of a display interface, (Lim [0052] “For example, the content in a UI element can be modeled in an animation as an elastic surface (e.g., a rectangular elastic surface) moving within a larger rectangular region that represents the extent of the scrollable region for the viewport. When a boundary is exceeded, the boundary can appear in the viewport and the elastic surface can be compressed. During compression of the elastic surface, the content can be compressed along the axis corresponding to a boundary that was exceeded.” The elastic surface is compressed when the boundary is reached. This functions as the elastic boundary.). 
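The inertia model Lim [0025] describes extends a flick gesture by applying inertia motion values such as friction coefficients to the gesture velocity. A minimal sketch of such a model, assuming a per-frame exponential friction decay (the function name, constants, and decay rule are illustrative, not Lim's actual pseudocode):

```python
def inertial_endpoint(position, velocity, friction=0.95, min_velocity=0.01):
    """Estimate where a flicked element stops under simulated inertia.

    Each frame the displacement advances by the current velocity, and the
    velocity is scaled by a friction coefficient; motion stops once the
    velocity falls below a threshold. The accumulated position is the
    assumed end-point of purely inertial movement.
    """
    while abs(velocity) > min_velocity:
        position += velocity
        velocity *= friction
    return position
```

With the values above, a flick at velocity 10 travels a bit under 200 units, i.e. roughly the geometric-series limit velocity / (1 - friction), truncated at the velocity threshold.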
determining collision information for occurrence of a collision rebound according to the first elastic boundary, (Lim [0058]: “Applying a compression effect along only one axis while maintaining a normal scale on the other axis can be useful, for example, to provide a greater visual effect where scaling along both axes (e.g., according to the same scale factor) would only make the content look smaller. If a compression effect is applied along only one axis, the UI element can react to a boundary on another axis with a "hard stop" in which motion along that axis ceases at the boundary, with a bounce effect, or with some other reaction. A determination of how compression is to be applied can be made dynamically (e.g., based on whether the diagonal movement is nearer to a vertical movement or a horizontal movement) or based on predetermined settings (e.g., user preferences).” The contact with or exceeding of the boundary and the bounce off is the collision with a rebound.). a motion state of the target element when the end event occurs, and an assumed end-point position, wherein the assumed end-point position is a stop position of the target element only moving inertially based on the motion state; (Lim teaches that movement of the viewport collide with the boundary and can go past the boundary, and that the physics engine determines movement back to the boundary. The assumed endpoint is the point relative to the elastic boundary where the object returned to alignment with it. When it has been passed, the physics engine applied factors other than just inertial motion. Lim [0082]: “If a boundary has been exceeded, that boundary is considered to be the "Target" boundary, and "Sign" indicates the direction to move in order to return to the boundary. (If a compression limit also has been reached, motion can be stopped as shown in the pseudocode 1100 in FIG. 11.) The physics engine calculates the distance (position_delta) to the Target boundary. 
The physics engine uses a spring model to calculate movement that returns the viewport to the Target boundary. The physics engine calculates the force (Fs) of the spring based on a spring factor constant (spring_factor), and uses the force of the spring and a damper factor (damper_factor) to calculate a new distance (new_delta) to the Target boundary. new_delta is non-negative because the spring model does not oscillate around the Target boundary. The physics engine calculates a change in velocity (delta_velocity) for the UI element using the force of the spring and the damper factor, and calculates a new velocity (new_component_velocity) for the component based on the original component velocity,” The viewport corresponds to the UI element being moved. The inertial movement is the movement leading up to collision with the boundary. When the boundary is exceeded, the physics engine with the factors like spring and damper factors contributes to the movement as well. This is the change in the motion state.). Lim alone does not explicitly teach the remaining claim limitations. However, Lim in combination with Roard teaches the claimed: and according to a preset target easing function and the collision information, generating a first animation from the motion state to the occurrence of the collision rebound, and a second animation from the occurrence of the collision rebound to a case where a first boundary of the target element is aligned with the first elastic boundary. (Lim teaches animating the different portions of the sequence of the GUI element colliding with the boundary. Lim [0061]: “From state 630, the system can transition to state 640 ("Animate") via state transition 632 ("Animate"). In state 640 ("Animate"), the system presents one or more animations (e.g., boundary effect animations, inertial motion animations). For example, the system can animate inertial motion caused by the flick gesture in the UI element in state 640. 
As another example, if the flick gesture causes the UI element to be moved beyond one or more boundaries, the system can animate one or more boundary effects in state 640. As another example, if a drag gesture causes the UI element to be moved beyond one or more boundaries, state transition 626 ("Out of Bounds") takes the system to state 640 if the system detects that the viewport is positioned beyond one or more boundaries of the UI element as a result of movement caused by the drag gesture.” Roard teaches movement of objects in an interface according to user induced movement. This includes return to a resting location, like that in a rebound. Roard teaches an easing function to determine the speed in this movement. Roard [0011]: “The actions further include determining that a distance of the user-induced path satisfies a threshold distance; and based on determining that the distance of the user-induced path satisfies a threshold distance, determining that the resting location is a location other than an original location of the user interface element. The action of determining an additional path and an additional speed for the user interface element to move along the additional path is further based on a maximum acceleration for the user interface element, a maximum speed of the user interface element, and a maximum time to move the user interface element along the additional path to the resting location. The additional path corresponds to a cubic spline that includes the resting location and a location of the user interface element when the user input ceased. The speed corresponds to a cubic-bezier easing function. The actions further include determining the speed of the user input at a time that the user input has ceased. The user-induced path is along a path that is fixed by the computing device.”). 
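Roard's “cubic-bezier easing function” is the same curve family used for CSS-style transitions: the curve is anchored at (0, 0) and (1, 1) with two control points, the x-component is solved for the input progress, and the output is read off the y-component. A rough sketch, assuming monotonic control points in [0, 1] (the bisection solver and tolerance are illustrative choices):

```python
def cubic_bezier_ease(x, x1, y1, x2, y2, eps=1e-6):
    """Map progress x in [0, 1] through a cubic Bezier easing curve with
    control points (x1, y1) and (x2, y2), endpoints (0, 0) and (1, 1)."""
    def bezier(t, a, b):
        # One component of a cubic Bezier whose end values are 0 and 1.
        return 3 * (1 - t) ** 2 * t * a + 3 * (1 - t) * t ** 2 * b + t ** 3

    # Solve bezier(t, x1, x2) == x for the curve parameter t by bisection;
    # x(t) is monotonic when 0 <= x1, x2 <= 1.
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if bezier(mid, x1, x2) < x:
            lo = mid
        else:
            hi = mid
    return bezier((lo + hi) / 2, y1, y2)
```

Control points of (1/3, 1/3) and (2/3, 2/3) degenerate to linear progress, which makes a convenient sanity check; a curve like (0.42, 0.0, 0.58, 1.0) gives the familiar ease-in-out speed profile Roard's resting-location animation relies on.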
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the easing function controlling the returning movement to a resting position as taught by Roard with the system of Lim in order to mathematically represent the movement of a GUI element to a boundary after a collision with a boundary.

As per claims 16 and 18, these claims are similar in scope to limitations recited in claim 1, and are thus rejected under the same rationale. As per claim 16, Lim [0003] teaches an apparatus to perform the claimed method, and Lim [0018] teaches a processor. As per claim 18, animation of GUI elements must happen on an electronic device. Lim [0090] teaches that computer programming modules, which are the executable code, are used for the techniques described.

As per claim 2, Lim teaches the claimed:

2. The method according to claim 1, further comprising: in response to the end event being detected, determining a current position and a current velocity of the target element as a current motion state. (Lim [0025]: “Applying inertia to a movement typically involves performing one more calculations using gesture information (e.g., a gesture start position, a gesture end position, gesture velocity and/or other information) and one or more inertia motion values (e.g., friction coefficients) to simulate inertia motion. Simulated inertia motion can be used in combination with other effects (e.g., boundary effects) to provide feedback to a user.” The motion state determines the presence of inertia motion and possibly other factors depending on collision with the boundary, as described above.).

As per claim 5, Lim teaches the claimed:

5.
The method according to claim 1, wherein the motion state comprises a current position and a current velocity; and the determining that the inertial movement of the target element after the end event of the two-dimensional touch movement involves a region outside the first elastic boundary of the display interface, comprises: (Lim [0053]: “For compression effects, the length of time that boundaries appear in the viewport and that the elastic surface is compressed can vary depending on motion type. For example, during inertial motion of the elastic surface (e.g., motion following a flick gesture), one or more boundaries of the elastic surface can appear briefly in the viewport and then move back out of the viewport (e.g., in a non-linear fashion as dictated by equations in a physics engine). As another example, if a user performing a drag gesture maintains contact with the touchscreen, one or more boundaries of the elastic surface can appear in the viewport indefinitely with the elastic surface in a compressed state, until the user breaks contact with the touchscreen.” The user uses a drag gesture to move the GUI element. The touch movement interacts with the boundaries of the surface.). if the current position is outside the first elastic boundary, determining that the inertial movement involves a region outside the first elastic boundary; (Lim [0067]: “In this detailed example, at the start of a gesture (e.g., a drag gesture or a flick gesture), the physics engine can take current parameters as input and perform calculations that can be rendered as movement in the UI. Parameters used by the physics engine in this detailed example include size parameters, position parameters, and velocity parameters. Size parameters include the size (e.g., in the horizontal (x) and vertical (y) dimensions) of the viewport. Position parameters (e.g., in the horizontal (x) and vertical (y) dimensions) include position_current, position_min, position_max. 
In this detailed example, position_current is a viewport value that represents a point (e.g., a midpoint) between the edges of the viewport. position_min and position_max are boundary values that represent boundaries in a UI element which, if exceeded by position_current, can cause the system to present boundary effects to indicate that a boundary in a UI element has been exceeded. As used herein, the term "exceed" is used to describe a value that is outside a range defined by boundaries. For example, a viewport value can be considered to exceed a boundary if the viewport value is less than position_min or greater than position_max”. This also determines whether it is within the boundary. The inertia motion still applies to motion outside the boundary, but is combined with other factors.). or if the current position is within the first elastic boundary, and if determining that the inertial movement needs to be performed based on the current velocity, determining the assumed end-point position; (Lim [0025]: “In some embodiments, movements in a UI are based at least in part on user input (e.g., gestures on a touchscreen) and an inertia model. For example, a movement can be extended beyond the actual size of a gesture on a touchscreen by applying inertia to the movement. Applying inertia to a movement typically involves performing one more calculations using gesture information (e.g., a gesture start position, a gesture end position, gesture velocity and/or other information) and one or more inertia motion values (e.g., friction coefficients) to simulate inertia motion. Simulated inertia motion can be used in combination with other effects (e.g., boundary effects) to provide feedback to a user.”). and determining that the inertial movement involves a region outside the first elastic boundary in a case that the assumed end-point position is outside the first elastic boundary. 
(Lim [0063]: “From state 640, state transition 642 ("In Bounds and Not Moving") takes the system back to state 610 ("Idle") after animations (e.g., inertial motion animations, boundary effect animations) have completed. In the example shown in FIG. 6, state transition 642 indicates that the system leaves state 640 when the viewport is within the boundaries of the UI element, and the UI element has stopped moving. Alternatively, the system can leave state 640 when some other set of conditions is present. For example, a system could enter an idle state where the viewport remains outside one or more boundaries of the UI element. As another alternative, other states and state transitions (e.g., states or state transitions corresponding to other kinds of user input or events) can be used in the system.” This is a situation where the position is outside the boundary and that is the end position where the element will remain.). As per claim 6, Lim teaches the claimed: 6. The method according to claim 5, wherein the determining that the inertial movement needs to be performed based on the current velocity, comprises: if the current velocity is greater than a preset velocity threshold, determining that the inertial movement needs to be performed. (Lim [0043]: “For example, simulated inertia motion can be applied when a gesture has a velocity above a threshold velocity. The new position can be further based on the simulated inertia motion. At 330, the system determines that the new position for the viewport exceeds one or more of the boundaries. At 340, the system calculates one or more multi-dimensional boundary effects based at least in part on the new position of the viewport. The multi-dimensional boundary effects comprise a compression effect. 
For example, the system can determine an extent by which a boundary has been exceeded, determine a region of the UI element to be compressed, and determine a scale factor for the compression effect based on the size of the region to be compressed and the extent by which the boundary has been exceeded.”). As per claim 7, Lim teaches the claimed: 7. The method according to claim 1, wherein the assumed end-point position is determined based on the first elastic boundary, the motion state, a preset deceleration/acceleration, and a preset over-boundary damping coefficient. (Lim [0082]: “If a boundary has been exceeded, that boundary is considered to be the "Target" boundary, and "Sign" indicates the direction to move in order to return to the boundary. (If a compression limit also has been reached, motion can be stopped as shown in the pseudocode 1100 in FIG. 11.) The physics engine calculates the distance (position_delta) to the Target boundary. The physics engine uses a spring model to calculate movement that returns the viewport to the Target boundary. The physics engine calculates the force (Fs) of the spring based on a spring factor constant (spring_factor), and uses the force of the spring and a damper factor (damper_factor) to calculate a new distance (new_delta) to the Target boundary. new_delta is non-negative because the spring model does not oscillate around the Target boundary. The physics engine calculates a change in velocity (delta_velocity) for the UI element using the force of the spring and the damper factor, and calculates a new velocity (new_component_velocity) for the component based on the original component velocity, the change in velocity due to the spring, and drag_coefficient. The physics engine calculates a new position for the component (new_component_position) relative to the Target boundary, based on new_delta and new_component_velocity. 
If new_component_position is at the Target boundary, or if new_component_position is now on the other side of the Target boundary, motion stops and the component position is set to be the same value as the Target boundary.” It would be obvious to preset thresholds and factors such as the damping factor, which is a coefficient.). As per claim 8, Lim teaches the claimed: 8. The method according to claim 1, wherein the determining the collision information for occurrence of the collision rebound comprises: determining a collision position where the collision rebound occurs according to the first elastic boundary, a current position in the motion state, and the assumed end-point position; (Lim teaches determining collision position. Lim [0031]: “The diagonal drag gesture 104 causes multi-dimensional boundary effects in state 192. For example, the diagonal drag gesture 104 causes a compression effect shown in state 192…. In the example shown in state 192, the compression effect indicates that a left boundary 112 and top boundary 114 of the web page 110 have been exceeded. The web page 110 also includes a right boundary 116 and a bottom boundary 118, which have not been exceeded in state 192. A boundary can be deemed exceeded or not exceeded based on, for example, whether a viewport position value (e.g., an x-coordinate value or a y-coordinate value) is outside a range defined by boundaries of the web page 110 (e.g., an x-coordinate range defined by left boundary 112 and right boundary 116, or a y-coordinate range defined by top boundary 114 and bottom boundary 118).” The position to which the viewport will return to alignment with the boundary is the assumed end point. The motion state is the change in the physics of the viewport when it exceeds the boundary. The motion state is taught by Lim [0082] where the viewport has exceeded the boundary and the spring effect is applied to it.). 
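The spring/damper update Lim [0082] describes (a force from spring_factor, damping via damper_factor, and a clamp at the Target boundary so the model never oscillates past it) can be sketched as a single time step. The constants, update order, and clamp test below are assumptions for illustration, not a transcription of Lim's pseudocode 1200:

```python
def spring_return_step(position, velocity, target, spring_factor=0.02,
                       damper_factor=0.3, dt=1.0):
    """One time step of a damped spring pulling an out-of-bounds element
    back to the Target boundary, loosely following Lim [0082].

    Returns (new_position, new_velocity). The position is clamped at the
    boundary so the model does not oscillate around the target.
    """
    delta = target - position                 # position_delta to the boundary
    force = spring_factor * delta             # Fs from the spring factor
    velocity = (velocity + force) * (1 - damper_factor)  # damped velocity
    position += velocity * dt
    # If the element reached or crossed the Target boundary, snap to it
    # and stop, mirroring the "motion stops" condition in Lim.
    if (delta >= 0 and position >= target) or (delta <= 0 and position <= target):
        return target, 0.0
    return position, velocity
```

Iterating this step from a position past the boundary decays monotonically back toward the target, which matches the non-oscillating behavior Lim attributes to new_delta being non-negative.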
and determining a collision time and a collision key frame based on the collision position, to determine the collision information. (Lim teaches evaluating the UI frame-by-frame. It can identify when a boundary has been exceeded. This is the collision. Thus, Lim can identify that as the keyframe. The physics engine then determines compression information based on when the element is out of bounds. This is the collision information. Lim [0062]: “Animation of movement can be performed, for example, at a system frame rate (e.g., 60 frames per second (fps)) or using an internal timer (e.g., at a minimum value such as 10 fps to ensure good performance and accurate calculations). A physics engine can perform stepped calculations at each frame. Whatever time step is used between frames, a new view of the UI can be drawn at each frame. The view that is drawn can depend on several factors, including whether one or more boundaries in a UI element have been exceeded. For example, if the viewport is out of bounds, a physics engine can determine compression effect information (e.g., a compression point and a compressed size for the content in the UI element being compressed), and the UI system can draw a view of the UI using the compression effect information. If the viewport is not out of bounds, the UI system can draw a view of the UI without a compression effect.”).

As per claim 9, Lim teaches the claimed:

9. The method according to claim 8, wherein the determining the collision position where the collision rebound occurs according to the first elastic boundary, the current position in the motion state, and the assumed end-point position, comprises: (Each of these values related to the collision rebound is described above.).
determining an initial collision position to enable a first distance from the first elastic boundary to the initial collision position in a first axis direction corresponding to the first elastic boundary to be equal to a second distance from the initial collision position to the assumed end-point position; (Lim teaches determining, for each axis, whether the elastic boundary has been reached and exceeded. The first distance is the distance from the current position of the GUI element to where it will exceed the elastic boundary. That is when the distance equals the second distance from the initial collision to the assumed end point. The second distance is the distance past the boundary that the object can be compressed before the object returns to align with the boundary. The assumed end point is when the boundary has been exceeded and the object will then be redirected to align with that boundary, based on the compression limit. Once the boundary has been exceeded, the motion state changes to determine the physics involved in the rebound effect. Lim [0082]: “…Also at each time step in the example shown in pseudocode 1200, for each axis, if a boundary has not been exceeded, a new velocity and a new position is calculated, taking into account a coefficient that slows the velocity (e.g., drag_coefficient, based on a model of fluid resistance). If a boundary has been exceeded, that boundary is considered to be the "Target" boundary, and "Sign" indicates the direction to move in order to return to the boundary. (If a compression limit also has been reached, motion can be stopped as shown in the pseudocode 1100 in FIG. 11.) The physics engine calculates the distance (position_delta) to the Target boundary. The physics engine uses a spring model to calculate movement that returns the viewport to the Target boundary.
The physics engine calculates the force (Fs) of the spring based on a spring factor constant (spring_factor), and uses the force of the spring and a damper factor (damper_factor) to calculate a new distance (new_delta) to the Target boundary. new_delta is non-negative because the spring model does not oscillate around the Target boundary. The physics engine calculates a change in velocity (delta_velocity) for the UI element using the force of the spring and the damper factor, and calculates a new velocity (new_component_velocity) for the component based on the original component velocity, the change in velocity due to the spring, and drag_coefficient. The physics engine calculates a new position for the component (new_component_position) relative to the Target boundary, based on new_delta and new_component_velocity. If new_component_position is at the Target boundary, or if new_component_position is now on the other side of the Target boundary, motion stops and the component position is set to be the same value as the Target boundary.” The examiner is interpreting “distance” to be a scalar quantity that allows it to be a distance in either direction of an axis. A distance can be great enough in either the positive or negative direction of a dimension to have its respective effect.). if the second distance is between a first distance threshold and a second distance threshold, taking the initial collision position as the collision position; (This is the case where the distance is such that the collision takes place but has not exceeded the boundary enough to be rebounded and have the physics engine applied. In this case, only the position of the object as it collides is taken as the position.).
if the second distance is greater than the second distance threshold, determining the collision position based on the second distance threshold and the motion state; (This is when the position of the object has exceeded a boundary and has gone far enough to reach the compression limit. Then the motion state, which determines the physics applied to the GUI element, is used to determine its position as it returns to align with the boundary.). and if the second distance is less than the first distance threshold, determining the collision position based on the current position and the assumed end-point position. (This is where the object is close enough to the assumed end point that it is not affected by the rebound that returns it to the boundary. However, the distance is less than the distance from the boundary at which there is no longer a collision, as that is the first distance threshold. In this case, the position is determined based on the current position and the assumed end-point position.). As per claim 10, Lim alone does not explicitly teach the claimed limitations. However, Lim in combination with Roard teaches the claimed: 10. The method according to claim 8, wherein the determining the collision time and the collision key frame based on the collision position, to determine the collision information, comprises: determining the collision key frame based on the collision position; (Lim teaches analyzing motion frame-by-frame as taught above in Lim [0062] and determining the frame in which the collision happens.). and determining the collision time based on a first displacement and an easing functional equation corresponding to the target easing function, wherein the first displacement is a displacement from the current position to the collision position. 
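The per-axis spring/damper return described in Lim [0082] and the three distance-threshold cases discussed above can be sketched as follows. This is a minimal illustration only, not Lim's actual pseudocode 1100/1200 and not the applicant's claimed method; the constant values and the rule labels are assumptions chosen for the example:

```python
# Illustrative sketch only. SPRING_FACTOR, DAMPER_FACTOR, and DRAG_COEFFICIENT
# stand in for Lim's spring_factor, damper_factor, and drag_coefficient
# constants; the numeric values are arbitrary assumptions.

SPRING_FACTOR = 50.0
DAMPER_FACTOR = 10.0
DRAG_COEFFICIENT = 0.95

def spring_return_step(delta, velocity, dt):
    """One per-axis time step returning an element toward the Target boundary.

    delta: non-negative distance past the boundary; velocity: component
    velocity along the axis. Motion stops once the boundary is reached,
    so the model does not oscillate around the Target boundary.
    """
    fs = -SPRING_FACTOR * delta                        # spring force toward boundary
    delta_velocity = (fs - DAMPER_FACTOR * velocity) * dt
    new_velocity = (velocity + delta_velocity) * DRAG_COEFFICIENT
    new_delta = delta + new_velocity * dt
    if new_delta <= 0.0:                               # at or past the boundary: stop
        return 0.0, 0.0
    return new_delta, new_velocity

def collision_position_rule(second_distance, t1, t2):
    """Select which of the three claimed determinations applies
    (t1 and t2 are the first and second distance thresholds, t1 < t2)."""
    if second_distance > t2:
        return "second threshold and motion state"     # compression limit reached
    if second_distance >= t1:
        return "initial collision position"            # ordinary collision
    return "current position and assumed end-point"    # too close for a rebound
```

Each call to spring_return_step shrinks the overshoot distance toward zero without crossing the boundary, matching Lim's description that new_delta is non-negative and motion stops at the Target boundary.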
(Roard [0014]: “The action of determining the second trajectory includes, if the peak velocity is greater than the maximum velocity, applying the initial acceleration at the maximum rate of acceleration from the initial velocity to the maximum velocity, maintaining the maximum velocity for an intermediate distance, and applying the deceleration at the maximum rate of acceleration from the peak velocity to zero. The intermediate distance is calculated such that the deceleration at the maximum rate of acceleration from the peak velocity to zero, will reach zero at the third position. The action of determining the second trajectory includes, if the initial velocity is in a direction away from the third position, applying a first deceleration at the maximum rate of acceleration from the initial velocity to zero, before applying the initial acceleration from zero. If a distance between the first position and the second position is below a threshold distance, the third position is determined to be located at the first position. Moving the user interface element along the first trajectory includes: calculating an anchor point based on the location of the user input; and moving one or more points of the user interface element along respective cubic splines, based on the anchor point, using a cubic-bezier easing function.” The GUI elements’ positions are calculated with the physical factors applied. This can also be in relation to an anchor point input by a user, such as the boundary taught by Lim. The easing function is used to determine the speed and acceleration based on time. It would be obvious to use the function to compute the time that the motion stops its original path, such as for a collision. The displacement from the original position to the collision position determines the velocity and acceleration calculations, which affect the other physical calculations. 
Roard [0011]: “The actions further include determining that a distance of the user-induced path satisfies a threshold distance; and based on determining that the distance of the user-induced path satisfies a threshold distance, determining that the resting location is a location other than an original location of the user interface element. The action of determining an additional path and an additional speed for the user interface element to move along the additional path is further based on a maximum acceleration for the user interface element, a maximum speed of the user interface element, and a maximum time to move the user interface element along the additional path to the resting location. The additional path corresponds to a cubic spline that includes the resting location and a location of the user interface element when the user input ceased. The speed corresponds to a cubic-bezier easing function. The actions further include determining the speed of the user input at a time that the user input has ceased. The user-induced path is along a path that is fixed by the computing device.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the easing function model of interface element movement in relation to a certain point as taught by Roard with the system of Lim in order to model the movement and use it to compute the time of the collision with the boundary. As per claim 11, Lim teaches the claimed: 11. The method according to claim 1, wherein the collision information comprises a collision position, a collision time, and a collision key frame corresponding to the collision position; (Lim teaches breaking an animation of the sequence into frames. This can include the frame when one or more boundaries is exceeded. This is a collision key frame. 
Lim [0062]: “Animation of movement can be performed, for example, at a system frame rate (e.g., 60 frames per second (fps)) or using an internal timer (e.g., at a minimum value such as 10 fps to ensure good performance and accurate calculations). A physics engine can perform stepped calculations at each frame. Whatever time step is used between frames, a new view of the UI can be drawn at each frame. The view that is drawn can depend on several factors, including whether one or more boundaries in a UI element have been exceeded…” Lim teaches determining collision position. Lim [0031]: “The diagonal drag gesture 104 causes multi-dimensional boundary effects in state 192. For example, the diagonal drag gesture 104 causes a compression effect shown in state 192…. In the example shown in state 192, the compression effect indicates that a left boundary 112 and top boundary 114 of the web page 110 have been exceeded. The web page 110 also includes a right boundary 116 and a bottom boundary 118, which have not been exceeded in state 192. A boundary can be deemed exceeded or not exceeded based on, for example, whether a viewport position value (e.g., an x-coordinate value or a y-coordinate value) is outside a range defined by boundaries of the web page 110 (e.g., an x-coordinate range defined by left boundary 112 and right boundary 116, or a y-coordinate range defined by top boundary 114 and bottom boundary 118).”). 
and generating the first animation from the motion state to the occurrence of the collision rebound and a second animation from the occurrence of the collision rebound to the case where the first boundary of the target element is aligned with the first elastic boundary, comprises: determining a first easing function before the collision rebound occurs and a second easing function after the collision rebound occurs based on the collision time and the target easing function; (Roard claim 14: “The method of claim 13, wherein moving the user interface element along the first trajectory comprises: calculating an anchor point based on the location of the user input; and moving one or more points of the user interface element along respective cubic splines, based on the anchor point, using a cubic-bezier easing function” The boundary can be the anchor point taught by Roard.). generating the first animation based on the first easing function, the collision time, the collision key frame, and a first displacement from the current position to the collision position; (Lim teaches animating the effect of the drag of the UI past a boundary. Lim [0061]: “From state 630, the system can transition to state 640 ("Animate") via state transition 632 ("Animate"). In state 640 ("Animate"), the system presents one or more animations (e.g., boundary effect animations, inertial motion animations). For example, the system can animate inertial motion caused by the flick gesture in the UI element in state 640. As another example, if the flick gesture causes the UI element to be moved beyond one or more boundaries, the system can animate one or more boundary effects in state 640. 
As another example, if a drag gesture causes the UI element to be moved beyond one or more boundaries, state transition 626 ("Out of Bounds") takes the system to state 640 if the system detects that the viewport is positioned beyond one or more boundaries of the UI element as a result of movement caused by the drag gesture.” The drag gesture that causes the UI element to be moved beyond the boundary is the first event. Lim [0031] teaches finding the coordinates of the boundaries and the viewport, as taught above. This can be used to find the displacement, which is the basis of the physical calculations like spring effects.). and generating the second animation based on the second easing function, the collision time, the collision key frame, and a second displacement from the collision position to a target position, wherein the target position is a position where the first boundary of the target element is aligned with the first elastic boundary. (Lim [0052]: “For example, the content in a UI element can be modeled in an animation as an elastic surface (e.g., a rectangular elastic surface) moving within a larger rectangular region that represents the extent of the scrollable region for the viewport. When a boundary is exceeded, the boundary can appear in the viewport and the elastic surface can be compressed. During compression of the elastic surface, the content can be compressed along the axis corresponding to a boundary that was exceeded.” This is the second animation. In this case, the collision position causes the surface to be moved past the boundary and then the element is returned to alignment with the boundary.). As per claim 12, Lim teaches the claimed: 12. 
The method according to claim 11, wherein the collision information further comprises a start key frame when the end event occurs and an end key frame when the target position is reached; (Lim [0062]: “Animation of movement can be performed, for example, at a system frame rate (e.g., 60 frames per second (fps)) or using an internal timer (e.g., at a minimum value such as 10 fps to ensure good performance and accurate calculations). A physics engine can perform stepped calculations at each frame. Whatever time step is used between frames, a new view of the UI can be drawn at each frame. The view that is drawn can depend on several factors, including whether one or more boundaries in a UI element have been exceeded. For example, if the viewport is out of bounds, a physics engine can determine compression effect information (e.g., a compression point and a compressed size for the content in the UI element being compressed), and the UI system can draw a view of the UI using the compression effect information. If the viewport is not out of bounds, the UI system can draw a view of the UI without a compression effect.” Lim teaches drawing frames for each step, including the start keyframe when the boundary is exceeded, and the end key frame when the viewport is returned.). generating the first animation comprises generating the first animation based on the first easing function, the collision time, the start key frame, the collision key frame, (Lim teaches the first animation. This is where the UI element is moved beyond the boundary. Lim [0061]: “From state 630, the system can transition to state 640 ("Animate") via state transition 632 ("Animate"). In state 640 ("Animate"), the system presents one or more animations (e.g., boundary effect animations, inertial motion animations). For example, the system can animate inertial motion caused by the flick gesture in the UI element in state 640. 
As another example, if the flick gesture causes the UI element to be moved beyond one or more boundaries, the system can animate one or more boundary effects in state 640. As another example, if a drag gesture causes the UI element to be moved beyond one or more boundaries, state transition 626 ("Out of Bounds") takes the system to state 640 if the system detects that the viewport is positioned beyond one or more boundaries of the UI element as a result of movement caused by the drag gesture.” The drag gesture that causes the UI element to be moved beyond the boundary is the first event.). and respective components of the first displacement in a first axis direction and a second axis direction, wherein the first axis direction and the second axis direction are perpendicular to each other; (Lim teaches that the boundaries can be vertical and horizontal and the elements can be compressed along vertical and horizontal axes, which would be perpendicular. Lim [0027]: “In examples described herein, boundary effects can be used to provide visual cues to a user to indicate that a boundary (e.g., a horizontal boundary, a vertical boundary, or other boundary) in a UI element (e.g., a web page displayed in a browser) has been reached or exceeded. In described implementations, a UI system presents multi-dimensional boundary effects in a UI element (or a portion of a UI element) by causing the UI element to be displayed in a visually distorted state, such as a squeezed or compressed state (i.e., a state in which text, images or other content is shown to be smaller than normal in one or more dimensions), to indicate that one or more boundaries of the UI element have been exceeded. As used herein, "multi-dimensional boundary effect" refers to a boundary effect in a UI element that is capable of moving in more than one dimension. 
Multi-dimensional movement can be performed separately in different dimensions (e.g., horizontal scrolling followed by vertical scrolling) or in combination (e.g., diagonal movement). Multi-dimensional boundary effects need not include boundary effects presented for more than one boundary at the same time, although in some embodiments boundary effects can be presented for more than one boundary at the same time. For example, in some embodiments, diagonal movement that causes a vertical boundary and a horizontal boundary of a UI element to be exceeded can cause compression of content in the UI element along a horizontal axis and along a vertical axis at the same time.”). and generating the second animation comprises: determining a rebound time according to a total motion time and the collision time, wherein the total motion time is a total time for the target element to inertially move to the assumed end-point position based on the motion state; (Lim teaches animations for different parts of the sequence, including the return portion. This is the second animation. Lim [0061]: “From state 630, the system can transition to state 640 ("Animate") via state transition 632 ("Animate"). In state 640 ("Animate"), the system presents one or more animations (e.g., boundary effect animations, inertial motion animations). For example, the system can animate inertial motion caused by the flick gesture in the UI element in state 640. As another example, if the flick gesture causes the UI element to be moved beyond one or more boundaries, the system can animate one or more boundary effects in state 640. 
As another example, if a drag gesture causes the UI element to be moved beyond one or more boundaries, state transition 626 ("Out of Bounds") takes the system to state 640 if the system detects that the viewport is positioned beyond one or more boundaries of the UI element as a result of movement caused by the drag gesture.” The drag gesture that causes the UI element to be moved beyond the boundary is the first event.). and generating the second animation based on the second easing function, the rebound time, the collision key frame, the end key frame, and respective components of the second displacement in the first axis direction and the second axis direction. (Lim [0052]: “For example, the content in a UI element can be modeled in an animation as an elastic surface (e.g., a rectangular elastic surface) moving within a larger rectangular region that represents the extent of the scrollable region for the viewport. When a boundary is exceeded, the boundary can appear in the viewport and the elastic surface can be compressed. During compression of the elastic surface, the content can be compressed along the axis corresponding to a boundary that was exceeded.” This involves the viewport being returned to its boundary. Lim [0082]: “If a boundary has been exceeded, that boundary is considered to be the "Target" boundary, and "Sign" indicates the direction to move in order to return to the boundary. (If a compression limit also has been reached, motion can be stopped as shown in the pseudocode 1100 in FIG. 11.) The physics engine calculates the distance (position_delta) to the Target boundary. The physics engine uses a spring model to calculate movement that returns the viewport to the Target boundary.”). As per claim 13, Lim teaches the claimed: 13. The method according to claim 1, wherein the target element is an interactive element on a web page, and the display interface is a web page. 
(Lim [0027]: “In examples described herein, boundary effects can be used to provide visual cues to a user to indicate that a boundary (e.g., a horizontal boundary, a vertical boundary, or other boundary) in a UI element (e.g., a web page displayed in a browser) has been reached or exceeded.”). As per claim 14, Lim teaches the claimed: 14. The method according to claim 13, further comprising: displaying the first animation and the second animation sequentially on the web page. (Lim teaches the different steps being animated on a web page. This includes the portions of the sequence in both the first and second animations. Lim claim 20: “20. A mobile computing device comprising one or more processors, a touchscreen device, and one or more computer readable storage media having stored therein computer-executable instructions for performing a method, the method comprising: receiving gesture information corresponding to a gesture on the touchscreen device, the gesture information indicating a movement in at least a horizontal dimension and a vertical dimension of content in a web page in a graphical user interface; and performing steps (a)-(i) for each of plural frames of a multi-dimensional boundary effect animation, steps (a)-(i) comprising: (a) based at least in part on the gesture information, calculating a new vertical position of a viewport in the graphical user interface; (b) based at least in part on the new vertical position, calculating an extent by which a vertical movement boundary associated with the web page has been exceeded; (c) calculating a vertical scale factor based at least in part on the extent by which the vertical movement boundary has been exceeded; (d) calculating a vertical compression effect in the web page based at least in part on the vertical scale factor; (e) based at least in part on the gesture information, calculating a new horizontal position of the viewport in the graphical user interface; (f) based at least in part on the new 
horizontal position, calculating an extent by which a horizontal movement boundary associated with the web page has been exceeded; (g) calculating a horizontal scale factor based at least in part on the extent by which the horizontal movement boundary has been exceeded; and (h) calculating a horizontal compression effect in the web page based at least in part on the horizontal scale factor; and (i) displaying the horizontal compression effect and the vertical compression effect in the web page on the touchscreen device.”). As per claim 15, Lim teaches the claimed: 15. The method according to claim 14, further comprising: during a process of displaying the first animation and the second animation sequentially on the web page, if a new touch event is detected, interrupting the animation, and displaying a last frame of the second animation at a next frame after interrupting the animation. (Lim claim 16: “A computer readable medium having stored thereon computer-executable instructions operable to cause a computer to perform a method comprising: receiving gesture information corresponding to a gesture on a touch input device, the gesture information indicating movement in at least a horizontal dimension and a vertical dimension; based at least in part on the gesture information, computing a new position of a viewport relative to a user interface element in a graphical user interface, the user interface element having a vertical movement boundary and a horizontal movement boundary; based at least in part on the new position, determining an extent by which the vertical movement boundary has been exceeded; determining a vertical scale factor based at least in part on the extent by which the vertical movement boundary has been exceeded; based at least in part on the new position, determining an extent by which the horizontal movement boundary has been exceeded; determining a horizontal scale factor based at least in part on the extent by which the horizontal movement 
boundary has been exceeded; and displaying a compression effect in the graphical user interface, wherein the compression effect comprises a visual compression of content in the graphical user interface according to the respective scale factors.” It would be obvious that the touch input can include inputs that were subsequent to the original. Thus, it could include other touch events, which comprise gesture information. Lim teaches finding the current position of the UI elements from a new touch input. Thus, it would include the last keyframe before responding to a new input.). As per claim 17, Lim teaches the claimed: 17. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed in a computer, enables the computer to perform the animation effect generation method according to claim 1. (Lim [0019]: “For clarity, only certain selected aspects of the software-based embodiments are described. Other details that are well known in the art are omitted. For example, it should be understood that the software-based embodiments are not limited to any specific computer language or program. Likewise, embodiments of the disclosed technology are not limited to any particular computer or type of hardware. Exemplary computing environments suitable for performing any of the disclosed software-based methods are introduced below.”). As per claim 19, Lim alone does not explicitly teach the claimed limitations. However, Lim in combination with Roard teaches the claimed: 19. The method according to claim 5, wherein the assumed end-point position is determined based on the first elastic boundary, the motion state, a preset deceleration/acceleration, and a preset over-boundary damping coefficient. (Roard teaches accessing acceleration and deceleration parameters for motion of a GUI element in response to a user’s movement. 
It would be obvious to have the acceleration and deceleration parameters preset to be accessed when calculating the element’s movement. Roard [0035]: “If the distance of the user's movement is greater than the distance threshold, then the computing device 102 opens the menu. The computing device 102 accesses the motion parameters for the pull down menu. The parameters may include a maximum time for the computing device 102 to complete the menu opening and a maximum acceleration for the pull down menu. The computing device 102 may select from several different movements to complete the menu opening. The movements may be ranked such that the computing device 102 selects the highest ranked movement that satisfies the maximum time and maximum acceleration parameters. Each predetermined movement may include different segments. An initial segment may be a deceleration segment, followed by a constant speed segment, followed by another deceleration segment. The initial segment may be an acceleration segment in instances where the maximum time constraint may require that the computing device 102 increase the speed of the menu from the final speed at the point when the user removed the user's finger.” Additionally, Lim teaches a damper factor that is the damping coefficient. The physics engine is used to compute movement when the boundary is exceeded. Lim [0068]: “Constants used by the physics engine in this detailed example include a resistance coefficient (drag_coefficient), a parking speed (parking_speed), a net maximum speed (net_maximum_speed), a spring factor (spring_factor), a damper factor (damper_factor), compression limits (component_compression_limit), a compression percentage (compression_percentage), and a compression offset (compression_offset). Alternatively, other parameters or constants can be used. As another alternative, values described as constants can be modified or allowed to change dynamically. 
As another alternative, some values described as parameters can be fixed as constants.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the preset acceleration and deceleration as taught by Roard with the system of Lim in order to allow the user to control the rate of change of the motion for the sake of a desired animation effect. As per claim 20, Lim alone does not explicitly teach the claimed limitations. However, Lim in combination with Roard teaches the claimed: 20. The method according to claim 9, wherein the determining the collision time and the collision key frame based on the collision position, to determine the collision information, comprises: determining the collision key frame based on the collision position; (Lim teaches breaking the movement into frames for the different steps. This can include the collision. Lim [0062]: “Animation of movement can be performed, for example, at a system frame rate (e.g., 60 frames per second (fps)) or using an internal timer (e.g., at a minimum value such as 10 fps to ensure good performance and accurate calculations). A physics engine can perform stepped calculations at each frame. Whatever time step is used between frames, a new view of the UI can be drawn at each frame. The view that is drawn can depend on several factors, including whether one or more boundaries in a UI element have been exceeded. For example, if the viewport is out of bounds, a physics engine can determine compression effect information (e.g., a compression point and a compressed size for the content in the UI element being compressed), and the UI system can draw a view of the UI using the compression effect information. If the viewport is not out of bounds, the UI system can draw a view of the UI without a compression effect.”). 
and determining the collision time based on a first displacement and an easing functional equation corresponding to the target easing function, wherein the first displacement is a displacement from the current position to the collision position. (Roard teaches determining the speed of an element’s path based on a cubic function. Speed is a function of displacement. This would include the displacement from its original position to the user-induced movement to a collision position as taught by Lim. Roard [0011]: “The actions further include determining that a distance of the user-induced path satisfies a threshold distance; and based on determining that the distance of the user-induced path satisfies a threshold distance, determining that the resting location is a location other than an original location of the user interface element. The action of determining an additional path and an additional speed for the user interface element to move along the additional path is further based on a maximum acceleration for the user interface element, a maximum speed of the user interface element, and a maximum time to move the user interface element along the additional path to the resting location. The additional path corresponds to a cubic spline that includes the resting location and a location of the user interface element when the user input ceased. The speed corresponds to a cubic-bezier easing function. The actions further include determining the speed of the user input at a time that the user input has ceased. The user-induced path is along a path that is fixed by the computing device.” If the speed can be calculated at a time, and speed is a function of time, it would be obvious to use the speed determined by the displacement to find the time when the collision point is met.). 
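The reasoning above, that a monotone easing function relating time to displacement can be inverted to find the moment the collision position is reached, can be sketched numerically. This is an illustration only: the cubic ease-out below is a stand-in (Roard's actual cubic-bezier easing function is not reproduced here), and the bisection approach and all names are assumptions:

```python
# Illustrative sketch: inverting displacement(t) = D * easing(t / T) to find
# the collision time. ease_out_cubic is a hypothetical stand-in easing curve.

def ease_out_cubic(t):
    """Monotone easing: normalized progress for normalized time t in [0, 1]."""
    return 1.0 - (1.0 - t) ** 3

def collision_time(first_displacement, total_displacement, total_time,
                   easing=ease_out_cubic, tol=1e-6):
    """Solve easing(t / total_time) = first_displacement / total_displacement
    for t by bisection (valid because the easing function is monotone)."""
    target = first_displacement / total_displacement
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if easing(mid) < target:
            lo = mid
        else:
            hi = mid
    return total_time * (lo + hi) / 2.0
```

For example, with this stand-in curve the element covers 87.5% of the total displacement at half the total time, so collision_time(87.5, 100.0, 2.0) is approximately 1.0.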
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the easing function as taught by Roard with the system of Lim in order to mathematically model the motion of the interface element and compute the time when it reaches the boundary. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Lim in view of Roard and further in view of Mak (Pub No. US 9075460 B2). As per claim 3, Lim alone does not explicitly teach the claimed limitations. However, Lim in combination with Mak teaches the claimed: 3. The method according to claim 2, wherein the determining the current position and the current velocity of the target element as the current motion state comprises: acquiring an end moment corresponding to the end event and the current position of the target element at the end moment; (Lim [0028]: “Boundary effects (e.g., compression effects) can be presented in different ways. For example, a boundary effect can be displayed for different lengths of time depending on user input and/or design choice. A boundary effect can end, for example, by returning the UI element to a normal (e.g., undistorted) state when a user lifts a finger, stylus or other object to end an interaction with a touchscreen after reaching a boundary, or when an inertia motion has completed. As another example, boundary effects other than compression effects can be used.” When the element reaches a boundary, that is the end event, and it has a state of inertia motion.). acquiring a first moment and a first position corresponding to a last touch event among a plurality of touch events recorded at a predetermined time interval during the two-dimensional touch movement; (Lim [0082]: “More generally, a physics engine can calculate positions and velocities as shown in the pseudocode 1200 in FIG. 12. 
In the example shown in pseudocode 1200, time_delta represents a length of time since the last time step (e.g., the length of time since the last view of the UI was drawn by the UI system). At each time step in the example shown in pseudocode 1200, the net velocity of the viewport relative to a UI element is checked against a maximum speed (net_maximum_speed) and the component velocities are reduced (while preserving direction) if the net velocity is above the maximum speed. Also at each time step in the example shown in pseudocode 1200, for each axis, if a boundary has not been exceeded, a new velocity and a new position is calculated, taking into account a coefficient that slows the velocity (e.g., drag_coefficient, based on a model of fluid resistance).” The values for the physics engine recorded at each time step are related to movement from touch events.). and if a first time difference between the first moment and the end moment is not less than a preset time difference, determining the current velocity of the target element based on the end moment, the current position, the first moment and the first position. (Mak teaches the movement of screen elements by touch and measures their movements according to physics-based measurements. Mak col. 4 lines 5-45: “FIG. 1B depicts termination of the momentum based zoom gesture on a touch-sensitive display of an electronic device. The pinch gesture 104 is initiated with the contact points at the start positions 106a, 106b. The contact points are then spread apart, as depicted by arrows 108a, 108b, while maintaining contact with the touch-sensitive display 102. The pinch gesture 104 is terminated when the contact points are lifted from the touch-sensitive display 102 at the termination positions 110a, 110b. As depicted in FIG. 
1B, the pinch gesture 104, or more particularly each contact point of the gesture 104, may be associated with a distance d.sub.a and d.sub.b between the respective start position 106a, 106b and termination position 110a, 110b. Although the distances d.sub.a and d.sub.b are depicted as being between the respective start positions 106a, 106b and the termination positions 110a, 110b, it is contemplated that a single distance difference may be used. The single distance may be selected from one of the contact points, may be combined from the distance differences of the two contact points, or may be provided as a difference of the distances between the two start positions 106a, 106b and the two termination positions 110a, 110b. In addition to the distance, the gesture 104, or more particularly the contact points of the gesture, may also be associated with an elapsed time (t.sub.a0-t.sub.a1), (t.sub.b0-t.sub.b1), between when the contact point was initiated and terminated. Depending upon how the distance is calculated, it will be appreciated that appropriate elapsed times can be determined. From the distance and elapsed times, a kinetic value such as a velocity or acceleration vector of the gesture 104 can be determined. The velocity or acceleration vector can be used as a momentum value for the pinch gesture. Although described as a momentum value, it is noted that the value may simply be a velocity that is used to determine how `far` the gesture would travel, or for how long the gesture would travel for. It should be appreciated that the term momentum is used to imply that the contact points of zoom gesture have a momentum component to provide information to predict a desired zoom level to continue zooming after the gesture is terminated.” Mak teaches the elapsed time from the beginning and ending of the dragging of a portion of the GUI. This is used to determine velocity and acceleration. The appropriate elapses time is the preset time difference. 
It must have elapsed so it is not less than that required time.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the measurement of elapsed time of the movement of an object as taught by Mak with the system of Lim in order to ensure that the movement has been appropriately analyzed by allowing sufficient time to pass during the movement of the interface element. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS JOHN FOSTER whose telephone number is (571)272-5053. The examiner can normally be reached Mon, Fri 8:30-6. Tues-Thurs 7:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
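The claim-3 step mapped to Mak above (estimating velocity from the last recorded touch sample and the end event, guarded by a preset time difference) can be sketched roughly as follows. All names and the threshold value (TouchSample, PRESET_TIME_DIFF) are illustrative assumptions, not drawn from the application or the cited references.

```python
# Hedged sketch: compute the target element's current velocity from the
# last touch sample and the end event, per the claim-3 language quoted above.
from dataclasses import dataclass
from typing import Optional

PRESET_TIME_DIFF = 0.008  # seconds; assumed guard against a too-short interval


@dataclass
class TouchSample:
    t: float  # first moment (s) of the last recorded touch event
    x: float  # first position along one axis (px)


def current_velocity(last_touch: TouchSample, end_t: float, end_x: float) -> Optional[float]:
    """Velocity at the end moment, or None if the first time difference
    is less than the preset time difference (another estimate would apply)."""
    dt = end_t - last_touch.t
    if dt < PRESET_TIME_DIFF:
        return None
    return (end_x - last_touch.x) / dt


# Example: the finger moved 120 px in the 16 ms before lift-off.
v = current_velocity(TouchSample(t=0.000, x=40.0), end_t=0.016, end_x=160.0)
print(v)  # ≈ 7500 px/s
```

The None branch reflects the claim's "not less than a preset time difference" condition: when the last sample is too close to the end moment, the quotient is numerically unreliable, so the method presumably falls back to a different state determination.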
/THOMAS JOHN FOSTER/
Examiner, Art Unit 2616

/HAI TAO SUN/
Primary Examiner, Art Unit 2616
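The per-time-step update that the examiner reads out of Lim's pseudocode 1200 (clamp net speed while preserving direction, apply a fluid-resistance drag coefficient, advance each axis only while inside the boundary) can be reconstructed roughly as follows. The constants and helper names here are assumptions for illustration, not Lim's actual values or code.

```python
# Hedged reconstruction of a Lim-style physics-engine time step.
import math

NET_MAXIMUM_SPEED = 4000.0  # px/s, assumed cap on net velocity
DRAG_COEFFICIENT = 5.0      # 1/s, assumed fluid-resistance model


def step(pos, vel, time_delta, in_bounds):
    """Advance one (x, y) state by time_delta seconds."""
    vx, vy = vel
    net = math.hypot(vx, vy)
    if net > NET_MAXIMUM_SPEED:
        # Reduce component velocities while preserving direction.
        scale = NET_MAXIMUM_SPEED / net
        vx, vy = vx * scale, vy * scale
    new_pos, new_vel = list(pos), [vx, vy]
    for axis in (0, 1):
        if in_bounds[axis]:  # only integrate while the boundary is not exceeded
            new_vel[axis] *= max(0.0, 1.0 - DRAG_COEFFICIENT * time_delta)
            new_pos[axis] += new_vel[axis] * time_delta
    return tuple(new_pos), tuple(new_vel)


# One 16 ms frame, starting over the speed cap on purpose.
pos, vel = step((0.0, 0.0), (6000.0, 0.0), time_delta=0.016, in_bounds=(True, True))
print(pos, vel)
```

The clamp-then-drag ordering matches the paragraph's description (speed check first, then per-axis velocity and position updates); Lim's actual pseudocode may order or parameterize these differently.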

Prosecution Timeline

May 31, 2024
Application Filed
Jan 26, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597097
INFORMATION PROCESSING DEVICE, MEASUREMENT SYSTEM, IMAGE PROCESSING METHOD, AND NON-TRANSITORY STORAGE MEDIUM
2y 5m to grant • Granted Apr 07, 2026
Patent 12592031
IMAGE PROCESSING METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 31, 2026
Patent 12586272
Methods and Systems for Transferring Hair Characteristics from a Reference Image to a Digital Image
2y 5m to grant • Granted Mar 24, 2026
Patent 12586158
IMAGE SIGNAL PROCESSOR FOR A COMPOSITE CHROMINANCE IMAGE AND A COMPOSITE WHITE IMAGE
2y 5m to grant • Granted Mar 24, 2026
Patent 12586143
METHOD, DEVICE, AND PRODUCT FOR GPU CLUSTER
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
95%
Grant Probability
99%
With Interview (+7.1%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
