DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see applicant’s correspondence, filed 12/25/2025, with respect to the rejection(s) of claim(s) 1, 10 and 20, and claims dependent thereon, under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Schroeder et al.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 4, 5, 10, 11, 13, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over:
Baba et al. (US 2017/0007921 A1) in view of
Rosenberg et al. (US 2018/0095591 A1) and in further view of
Schroeder et al. (Schroeder et al., “Decimation of Triangle Meshes”, ACM, Computer Graphics, 26, 2, July 1992, pp. 65-70)
Regarding claim 10, Baba discloses:
A rendering device (Baba, Fig. 3 and ¶¶42-43) comprising:
A memory and a processor, (Baba, Fig. 4 and ¶43: CPU and memory; ¶106: computer program code in memory configured with processor to perform operations)
Wherein computer programs are stored in memory, and the computer programs, when being executed by the processor cause the processor to: (Baba, Fig. 4 and ¶43: user interface and game program recorded in memory; ¶106: computer program code in memory configured with processor to perform operations)
Construct a mesh model of an elastic object displayed on a display device (Baba, ¶58: mesh region of elastic object – Fig. 10 is example of user interface image of elastic object; Fig. 12 and ¶59: changing shape of mesh object)
Determine, in response to a touch sliding operation applied on the display device for the elastic object, a deformation position (Baba, Fig. 7 and ¶48: elastic object 420 behaves like an elastic body based on user interaction with the user interface, such as user’s operation on touch panel; ¶52: slide operation based on movement of the contact point on the touch panel; Fig. 11 and ¶59: elastic deformation expressed by moving coordinates of respective vertices of plate-like polygon 700 divided into a plurality of meshes, wherein when coordinates of arbitrary vertex 720A are moved by slide operation, coordinates of other vertices are also changed by moving vector direction and distance)
Wherein the touch sliding operation is applied on the display device and is used to trigger a deformation of the elastic object on the display device under the action, and the action point is a mesh point in the elastic object that deviates from an original position under the action; (Baba, ¶52: slide operation based on movement of the contact point on the touch panel; Figs. 11-12 and ¶59: elastic deformation of object based on user slide operation of coordinate 720A, wherein the moving distances of the other vertices are changed based on the moving vector and are weighted based on a distance from vertex 720A)
Determine, based on the deformation position (Baba, Fig. 12 and ¶59: other vertices moved based on slide operation on vertex 720A, wherein the moving distances of the other vertices are changed based on the moving vector and are weighted based on a distance from vertex 720A; Also Fig. 13 and ¶61: stretch shape along direction of slide operation, with meshes having different stretch factors based on distance from contact end point); and
Implement, based on the motion trajectory of the mesh points in the mesh model of the elastic object, an elastic movement of the mesh model of the elastic object. (Baba, Fig. 7 and ¶48: elastic object 420 stretched based on user interaction with user interface; Figs. 11-12 and ¶59: the elastic deformation of an elastic object is expressed by moving coordinates of respective vertices 720 of a plate-like polygon 700 divided into the plurality of meshes 710)
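By way of illustration only, the distance-weighted vertex displacement cited above from Baba, Figs. 11-12 and ¶59, may be sketched as follows. The sketch is the examiner’s own and is not code from Baba; the function name and the exponential falloff weight are hypothetical, chosen merely to show the other vertices moving along the slide vector by amounts weighted by their distance from the dragged vertex 720A.

import math

def drag_vertex(vertices, dragged_idx, move_vec, falloff=0.05):
    """Move the dragged vertex by move_vec and shift every other vertex
    along the same direction, weighted by its distance to the dragged one."""
    ox, oy = vertices[dragged_idx]          # original position of the dragged vertex
    deformed = []
    for i, (x, y) in enumerate(vertices):
        if i == dragged_idx:
            deformed.append((x + move_vec[0], y + move_vec[1]))
            continue
        w = math.exp(-falloff * math.hypot(x - ox, y - oy))   # nearer vertices move more
        deformed.append((x + w * move_vec[0], y + w * move_vec[1]))
    return deformed

# Usage: a three-vertex strip dragged 5 units to the right at vertex 0.
print(drag_vertex([(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)], 0, (5.0, 0.0)))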
Baba fails to explicitly teach the determination and use of velocity for the elastic object movement.
Rosenberg discloses:
Determine, in response to an operation applied on the elastic object, a deformation position and velocity of an action point in the mesh model of the elastic object under action of the operation (Rosenberg, ¶¶19-20: input object, e.g. finger, moves from particular location to second location on touch surface, mapped to an area on a virtual surface of a virtual 3D object, and shift area on virtual surface of virtual 3D object inward at a rate or degree corresponding to the force magnitude of the input and in direction corresponding to orientation of the touch sensor surface in real space; ¶36: computer system can map the origin of the force vector to a vertex, triangle, or point in a 3D mesh representing the virtual environment, or the computer system can map the origin of the force vector to a particular pixel within a texture mapped to the 3D mesh representing the virtual environment; ¶45: generating a force vector includes a magnitude within a sampling or scan period – i.e. a distance over a period of time is a velocity, also discussed in ¶46 as location points to determine magnitude and generate force from magnitude; ¶56: the computer system moves and/or deforms a virtual surface (or a virtual object) within the virtual model according to a virtual force vector and a physics model defining mechanical creep, elastic deformation, plastic deformation, and/or inertial dynamics, etc. of the virtual surface and/or the virtual object; Also ¶68 discloses determining force magnitude based on relative speed of input)
Determine, based on the deformation position and the velocity of the action point together with an elastic constraint between the mesh points in the mesh model of the elastic object, a motion trajectory of mesh points in the mesh model of the elastic object (Rosenberg, ¶20: input object, e.g. finger, moves from particular location to second location on touch surface, mapped to an area on a virtual surface of a virtual 3D object, and shift area on virtual surface of virtual 3D object inward at a rate or degree corresponding to the force magnitude of the input and in direction corresponding to orientation of the touch sensor surface in real space; ¶56: the computer system moves and/or deforms a virtual surface (or a virtual object) within the virtual model according to a virtual force vector and a physics model defining mechanical creep, elastic deformation, plastic deformation, and/or inertial dynamics, etc. of the virtual surface and/or the virtual object)
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface for allowing interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
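By way of illustration only, the velocity determination attributed above to Rosenberg ¶45 (a displacement within a sampling or scan period, i.e. a distance over a period of time) may be sketched as follows. The sketch is the examiner’s own and is not code from Rosenberg; the names and the assumed 60 Hz scan period are hypothetical.

def touch_velocity(p0, p1, dt):
    """Velocity vector of the contact point across one scan period dt."""
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

# Usage: two touch samples 16 ms apart (roughly a 60 Hz touch scan).
vx, vy = touch_velocity((100.0, 240.0), (108.0, 240.0), 0.016)
print(vx, vy)   # 500.0 px/s in x, 0.0 px/s in y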
The only remaining limitation not explicitly taught by Baba and Rosenberg is that the mesh model is constructed by selecting points that depict a surface or an outline of the object (note the claim does not specify quantity) and neglecting points (note the claim does not specify quantity) that depict an internal structure of the object. Examiner notes the limitation is merely directed to the construction of the mesh model and is not otherwise tied to the remaining limitations in any functional manner. Once a mesh model is constructed, the claim then goes on to recite the determination of the interaction with the constructed mesh model. This appears to merely be a feature of constructing a mesh of an object with simplification of the mesh model itself, and the construction is not determined by, or otherwise tied to, the deformation calculations in any way other than merely forming the data as the mesh to which they are applied. In other words, a simplified mesh construction of an object would be read on by the claim, which is then combinable as the representative mesh model with the remaining limitations taught by Baba and Rosenberg. Furthermore, the model merely requires that at least some points of an “internal structure” are “neglected”, where the neglected points can be interpreted as any points in the body of a 3D object, or even the internal structure of a flat object within the borders of an outline or bounding region of the object, such as a simplified mesh that neglects at least two or more points within the internal structure of the surface mesh.
Schroeder discloses:
Construct a mesh model of an object displayed on a display device comprising selecting points that depict a surface or an outline of the object and neglecting points that depict an internal structure of the object; (Schroeder, Abstract:
Computer graphics applications routinely generate geometric models consisting of large numbers of triangles. We present an algorithm that significantly reduces the number of triangles required to model a physical or abstract object. The algorithm makes multiple passes over an existing triangle mesh, using local geometry and topology to remove vertices that pass a distance or angle criterion. The holes left by the vertex removal are patched using a local triangulation process. The decimation algorithm has been implemented in a general scientific visualization system as a general network filter.
p. 66, section 3 “The Decimation Algorithm”, ¶1 discloses vertices of the decimated mesh can be a subset of the original vertices, with requirements of preserving original topology; p 66, section 3.1 Overview:
Multiple passes are made over all vertices in the mesh. During a pass, each vertex is a candidate for removal and, if it meets the specified decimation criteria, the vertex and all triangles that use the vertex are deleted. The resulting hole in the mesh is patched by forming a local triangulation. The vertex removal process repeats, with possible adjustment of the decimation criteria, until some termination condition is met.
Accordingly, Schroeder discloses the generation of a simplified mesh that neglects interior vertices, generating holes which are then tessellated using the remaining vertices; p. 66, right column, 3rd paragraph discloses classifying a vertex as a boundary vertex – i.e. a vertex on the boundary or outline of the mesh)
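By way of illustration only, a single vertex test of the decimation pass quoted above from Schroeder, section 3.1, may be sketched as follows. The sketch is the examiner’s own: the distance-to-plane removal criterion follows the quoted overview, but for brevity the hole is patched with a naive triangle fan over an ordered ring of neighbors, which is an assumption; Schroeder patches the hole with a recursive splitting-plane triangulation.

import numpy as np

def plane_distance(v, ring):
    """Distance from vertex v to the best-fit plane of its ring of neighbors."""
    pts = np.asarray(ring, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)    # last row is the plane normal
    return abs(np.dot(np.asarray(v, dtype=float) - centroid, vt[-1]))

def decimate_vertex(v, ring, tol):
    """If v passes the distance criterion, remove it and patch the hole
    with a triangle fan over the ordered ring; otherwise keep it."""
    if plane_distance(v, ring) >= tol:
        return None                             # vertex retained
    return [(ring[0], ring[i], ring[i + 1]) for i in range(1, len(ring) - 1)]

# Usage: a nearly coplanar interior vertex with four ordered neighbors is
# removed and the resulting hole is patched with two triangles.
ring = [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]
print(decimate_vertex((1, 1, 0.01), ring, tol=0.1))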
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, by further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, using known electronic interfacing and programming techniques. The mesh simplification results in an improved mesh-based interactive modeling system by allowing for faster and more efficient processing while maintaining a good approximation of the data, balancing efficiency with good computation effects, and reducing consumption of expensive processing resources and time.
Regarding claim 1, the device of claim 10 performs the method of claim 1 and as such, claim 1 is rejected based on the same rationale as claim 10 set forth above.
Regarding claim 11, Baba further discloses:
Wherein a touch point corresponding to the touch sliding operation is used to determine the action point, and a sliding trajectory corresponding to the touch sliding operation is used to determine the deformation position (Baba, ¶43: input unit detects slide or swipe operation on touch panel by user; Fig. 7 and ¶48: elastic object 420 behaves like an elastic body based on user interaction with the user interface, such as user’s operation on touch panel, including contact start point; Fig. 12 and ¶59 discusses arbitrary vertex 720A moved by slide operation)
Rosenberg further discloses:
Wherein a touch point corresponding to the touch sliding operation is used to determine the action point, and a sliding trajectory corresponding to the touch sliding operation is used to determine the deformation position and the velocity of the action point (Rosenberg, ¶27: input on touch surface having one or more X,Y locations on touch sensor surface; ¶45: generating a force vector includes a magnitude within a sampling or scan period – i.e. a distance over a period of time is a velocity, also discussed in ¶46 as location points to determine magnitude and generate force from magnitude; ¶56: the computer system moves and/or deforms a virtual surface (or a virtual object) within the virtual model according to a virtual force vector and a physics model defining mechanical creep, elastic deformation, plastic deformation, and/or inertial dynamics, etc. of the virtual surface and/or the virtual object; Also ¶68 discloses determining force magnitude based on relative speed of input)
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface for allowing interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
Regarding claim 2, the device of claim 11 performs the method of claim 2 and as such, claim 2 is rejected based on the same rationale as claim 11 set forth above.
Regarding claim 13, Baba further discloses:
Wherein in a case that the touch point corresponding to the touch sliding operation is located on the elastic object, the action point is a mesh point corresponding to the touch point on the mesh model (Baba, Fig. 7 and ¶48: elastic object 420 with base portion 430 positioned in the contact start point of the slide operation; Fig. 12 and ¶59: figure shows part of elastic object, with coordinate vertex changed based on slide movement, shown with touch point corresponding to mesh point 720A)
Regarding claim 4, the device of claim 13 performs the method of claim 4 and as such, claim 4 is rejected based on the same rationale as claim 13 set forth above.
Regarding claim 14, Baba further discloses:
When a user performs the touch sliding operation, map (Baba, ¶43: input unit detects slide or swipe operation on touch panel by user; Fig. 7 and ¶48: elastic object 420 behaves like an elastic body based on user interaction with the user interface, such as user’s operation on touch panel, including contact start point; Fig. 12 and ¶59 discusses arbitrary vertex 720A moved by slide operation)
Rosenberg further discloses:
When a user performs the touch sliding operation, map a sliding velocity and a sliding distance applied by the user to the action point, to obtain the deformation position and the velocity of the action point (Rosenberg, ¶36: computer system can map the origin of the force vector to a vertex, triangle, or point in a 3D mesh representing the virtual environment, or the computer system can map the origin of the force vector to a particular pixel within a texture mapped to the 3D mesh representing the virtual environment; ¶45: generating a force vector includes a magnitude within a sampling or scan period – i.e. a distance over a period of time is a velocity, also discussed in ¶46 as location points to determine magnitude and generate force from magnitude; ¶56: the computer system moves and/or deforms a virtual surface (or a virtual object) within the virtual model according to a virtual force vector and a physics model defining mechanical creep, elastic deformation, plastic deformation, and/or inertial dynamics, etc. of the virtual surface and/or the virtual object; Also ¶68 discloses determining force magnitude based on relative speed of input)
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface for allowing interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
Regarding claim 5, the device of claim 14 performs the method of claim 5 and as such, claim 5 is rejected based on the same rationale as claim 14 set forth above.
Claim(s) 3 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over:
Baba et al. (US 2017/0007921 A1) in view of
Rosenberg et al. (US 2018/0095591 A1) and
Schroeder et al. (Schroeder et al., “Decimation of Triangle Meshes”, ACM, Computer Graphics, 26, 2, July 1992, pp. 65-70) in further view of
Chen (US 2014/0292802 A1).
Regarding claim 12, the limitations included from claim 11 are rejected based on the same rationale as claim 11 set forth above. Further regarding claim 12, Baba further discloses:
a mesh model of an elastic object (Baba, ¶58: mesh region of elastic object; Fig. 12 and ¶59: changing shape of mesh object)
Chen discloses:
Wherein in a case that the touch point corresponding to the touch sliding operation is not located on the object, the action point is a predetermined mesh point on the mesh model of the object (Chen, ¶29: A touch displacement may be received 310 in the mesh-fitting method 300. The touch displacement may be effectuated by a finger touch, a mouse action, a digital-pen device or other interface method. A touch displacement may correspond to a user touch gesture commencing at an initial touch point and terminating at a final touch point. A nearest intermediate control point to the initial touch point may be determined 312, and the mesh edge, top edge or bottom edge, associated with the nearest intermediate control point may be adjusted 314 accordingly by moving the nearest intermediate control point a displacement commensurate with the received touch displacement. For example, a received touch displacement of a first length in a first direction may be applied to the nearest intermediate control point.)
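By way of illustration only, the nearest-control-point behavior cited above from Chen ¶29 may be sketched as follows. The sketch is the examiner’s own and is not code from Chen; the names are hypothetical, and it merely shows a touch displacement that does not begin on a control point being applied to the closest predetermined control point.

import math

def nearest_control_point(touch, control_points):
    """Index of the control point closest to the initial touch point."""
    return min(range(len(control_points)),
               key=lambda i: math.dist(touch, control_points[i]))

def apply_touch_displacement(touch_start, touch_end, control_points):
    """Move the nearest control point by the received touch displacement."""
    i = nearest_control_point(touch_start, control_points)
    dx, dy = touch_end[0] - touch_start[0], touch_end[1] - touch_start[1]
    x, y = control_points[i]
    control_points[i] = (x + dx, y + dy)
    return i

# Usage: a touch near the middle control point drags that point upward.
pts = [(0.0, 0.0), (50.0, 0.0), (100.0, 0.0)]
apply_touch_displacement((45.0, 10.0), (45.0, 40.0), pts)
print(pts)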
Baba, Rosenberg, Schroeder and Chen are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, and further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, by further using the designated point away from the input point for controlling the object as provided by Chen, using known electronic interfacing and programming techniques. The modification results in an improved user interface by allowing easier control of objects, accommodating some natural imprecision of user input without requiring absolute rigidity of correspondence between the touch input and the control point that would otherwise render the system more difficult to use.
Regarding claim 3, the device of claim 12 performs the method of claim 3 and as such, claim 3 is rejected based on the same rationale as claim 12 set forth above.
Claim(s) 7, 9, 16, 18, 20 and 22-26 is/are rejected under 35 U.S.C. 103 as being unpatentable over:
Baba et al. (US 2017/0007921 A1) in view of
Rosenberg et al. (US 2018/0095591 A1) and
Schroeder et al. (Schroeder et al., “Decimation of Triangle Meshes”, ACM, Computer Graphics, 26, 2, July 1992, pp. 65-70) in further view of
Otsuka et al. (US 2014/0213332 A1)
Examiner further cites Korba et al. (US 2014/0347369 A1) for evidentiary support.
Regarding claim 20, Baba further discloses:
A non-transitory computer readable storage medium having computer programs stored thereon, wherein the computer programs, when being executed by a processor cause the processor to perform a method. (Baba, Fig. 4 and ¶43: CPU and memory; ¶106: computer program code in memory configured with processor to perform operations)
constructing a mesh model of an elastic object displayed on a display device (Baba, ¶58: mesh region of elastic object – Fig. 10 is example of user interface image of elastic object; Fig. 12 and ¶59: changing shape of mesh object)
Determining, in response to a touch sliding operation applied on the display device for the elastic object, a deformation position (Baba, Fig. 7 and ¶48: elastic object 420 behaves like an elastic body based on user interaction with the user interface, such as user’s operation on touch panel; ¶52: slide operation based on movement of the contact point on the touch panel; Fig. 11 and ¶59: elastic deformation expressed by moving coordinates of respective vertices of plate-like polygon 700 divided into a plurality of meshes, wherein when coordinates of arbitrary vertex 720A are moved by slide operation, coordinates of other vertices are also changed by moving vector direction and distance) comprising:
Determining, based on the deformation position of the action point at a moment when the touch sliding operation is stopped on the display device together with the elastic constraint between the action point and other mesh points, force information of the action point at the deformation position, and (Baba, ¶75: after Step S106, when the non-contact determination unit 860 determines that the user has lifted the finger off the touch panel, as described above with reference to FIG. 8, the restored-object forming unit 870 contracts the elastic object that has been elastically deformed stepwise toward the start point in accordance with a restoring force of the elastic object.)
Determining, based on the deformation position, (Baba, ¶74 discloses deformation when user continuously causes the finger to make a slide action up to end point 2 – see fig. 22; ¶85: after Step S106, when the non-contact determination unit 860 determines that the user has lifted the finger off the touch panel, as described above with reference to FIG. 8, the restored-object forming unit 870 contracts the elastic object that has been elastically deformed stepwise toward the start point in accordance with a restoring force of the elastic object, to thereby restore the initial shape illustrated in FIG. 6. – i.e. the velocity and deformation of the action point is determined after the touch sliding has stopped as the object is restored to the initial shape)
Wherein the touch sliding operation is applied on the display device and is used to trigger a deformation of the elastic object on the display device under the action, and the action point is a mesh point in the elastic object that deviates from an original position under the action; (Baba, ¶52: slide operation based on movement of the contact point on the touch panel; Figs. 11-12 and ¶59: elastic deformation of object based on user slide operation of coordinate 720A, wherein the moving distances of the other vertices are changed based on the moving vector and are weighted based on a distance from vertex 720A)
Determining, based on the deformation position (Baba, Fig. 12 and ¶59: other vertices moved based on slide operation on vertex 720A, wherein the moving distances of the other vertices are changed based on the moving vector and are weighted based on a distance from vertex 720A; Also Fig. 13 and ¶61: stretch shape along direction of slide operation, with meshes having different stretch factors based on distance from contact end point); and
Implementing, based on the motion trajectory of the mesh points in the mesh model of the elastic object, an elastic movement of the mesh model of the elastic object. (Baba, Fig. 7 and ¶48: elastic object 420 stretched based on user interaction with user interface; Figs. 11-12 and ¶59: the elastic deformation of an elastic object is expressed by moving coordinates of respective vertices 720 of a plate-like polygon 700 divided into the plurality of meshes 710)
The only aspect of the claim not explicitly taught by Baba is the determination and use of velocity for the elastic object movement.
Rosenberg discloses:
Determine, in response to an operation applied on the elastic object, a deformation position and velocity of an action point in the mesh model of the elastic object under action of the operation (Rosenberg, ¶¶19-20: input object, e.g. finger, moves from particular location to second location on touch surface, mapped to an area on a virtual surface of a virtual 3D object, and shift area on virtual surface of virtual 3D object inward at a rate or degree corresponding to the force magnitude of the input and in direction corresponding to orientation of the touch sensor surface in real space; ¶36: computer system can map the origin of the force vector to a vertex, triangle, or point in a 3D mesh representing the virtual environment, or the computer system can map the origin of the force vector to a particular pixel within a texture mapped to the 3D mesh representing the virtual environment; ¶45: generating a force vector includes a magnitude within a sampling or scan period – i.e. a distance over a period of time is a velocity, also discussed in ¶46 as location points to determine magnitude and generate force from magnitude; ¶56: the computer system moves and/or deforms a virtual surface (or a virtual object) within the virtual model according to a virtual force vector and a physics model defining mechanical creep, elastic deformation, plastic deformation, and/or inertial dynamics, etc. of the virtual surface and/or the virtual object; Also ¶68 discloses determining force magnitude based on relative speed of input)
Determining, based on the deformation position, the velocity and the force information of the action point, the velocity and the deformation position of the action point (Rosenberg, ¶20: in response to application of a finger on a touch sensor surface, the particular location on the touch sensor surface is mapped to an area on a virtual surface of a virtual three-dimensional object, shifting the area of the virtual three-dimensional object inward at a rate or degree corresponding to the force magnitude of the input in a direction corresponding to the orientation of the touch sensor surface in real space)
Determine, based on the deformation position and the velocity of the action point together with an elastic constraint between the mesh points in the mesh model of the elastic object, a motion trajectory of mesh points in the mesh model of the elastic object (Rosenberg, ¶20: input object, e.g. finger, moves from particular location to second location on touch surface, mapped to an area on a virtual surface of a virtual 3D object, and shift area on virtual surface of virtual 3D object inward at a rate or degree corresponding to the force magnitude of the input and in direction corresponding to orientation of the touch sensor surface in real space; ¶56: the computer system moves and/or deforms a virtual surface (or a virtual object) within the virtual model according to a virtual force vector and a physics model defining mechanical creep, elastic deformation, plastic deformation, and/or inertial dynamics, etc. of the virtual surface and/or the virtual object)
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface for allowing interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
Schroeder discloses:
Construct a mesh model of an object displayed on a display device comprising selecting points that depict a surface or an outline of the object and neglecting points that depict an internal structure of the object; (Schroeder, Abstract:
Computer graphics applications routinely generate geometric models consisting of large numbers of triangles. We present an algorithm that significantly reduces the number of triangles required to model a physical or abstract object. The algorithm makes multiple passes over an existing triangle mesh, using local geometry and topology to remove vertices that pass a distance or angle criterion. The holes left by the vertex removal are patched using a local triangulation process. The decimation algorithm has been implemented in a general scientific visualization system as a general network filter.
p. 66, section 3 “The Decimation Algorithm”, ¶1 discloses vertices of the decimated mesh can be a subset of the original vertices, with requirements of preserving original topology; p 66, section 3.1 Overview:
Multiple passes are made over all vertices in the mesh. During a pass, each vertex is a candidate for removal and, if it meets the specified decimation criteria, the vertex and all triangles that use the vertex are deleted. The resulting hole in the mesh is patched by forming a local triangulation. The vertex removal process repeats, with possible adjustment of the decimation criteria, until some termination condition is met.
Accordingly, Schroeder discloses the generation of a simplified mesh that neglects interior vertices, generating holes which are then tessellated using the remaining vertices; p. 66, right column, 3rd paragraph discloses classifying a vertex as a boundary vertex – i.e. a vertex on the boundary or outline of the mesh)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, by further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, using known electronic interfacing and programming techniques. The mesh simplification results in an improved mesh-based interactive modeling system by allowing for faster and more efficient processing while maintaining a good approximation of the data, balancing efficiency with good computation effects, and reducing consumption of expensive processing resources and time.
Accordingly, Baba modified by Rosenberg teaches the concept of a deformable mesh object on a display device, wherein a force is determined based on user input, and the mesh is deformed based on the resulting force from the input. The only element missing is the generation of a force vector based on the touch sliding input.
Otsuka teaches:
Determining, based on the position, the velocity and the force information of the action point at the moment when the touch sliding operation is stopped on the display device, the velocity and the position of the action point after the touch sliding operation is stopped (Otsuka, ¶26: User actions such as a direction of finger swipe against a touchscreen, an acceleration/speed of finger swipe against the touchscreen, a length of finger swipe against the touchscreen, and/or a downward force exerted on the touchscreen, may determine the trajectory and/or momentum of the projectile; ¶27: the trajectory of a projectile is determined by an input made on a touchscreen, e.g., swiping a finger from a first point to a second point on the touchscreen. A velocity vector having a direction component and a speed component can be obtained from the direction and speed of the swipe, which can then be used to affect the trajectory of the projectile; ¶52: The touchscreen receives an input comprising a finger swipe, from which a swipe direction and a swipe speed are obtained (step 164). The swiping motion is detected by a sensing unit 810 (see FIG. 8) and is translated to a velocity vector by a controller 802 (see FIG. 8), which is used to determine the trajectory of the projectile)
Baba, Rosenberg and Otsuka are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, by utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, and further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, by using the technique for obtaining a force input vector to apply to a virtual object based on a touch and release of a touch screen as provided by Otsuka, using known electronic interfacing and programming techniques. One of ordinary skill in the art would have understood that the different force vectors can be attributed to a single action point on the mesh, as this is merely a mathematical computation of forces as applied as a total to a point, whether the force is derived from the internal forces of the object itself or from a particular user input that generates a force vector of both direction and magnitude, as exemplified by Korba, which teaches that a mesh node can be used to modify a virtual object based on the sum of the applied forces to the node itself, including elastic forces applied to the node and user input forces in the form of a gesture (Korba, ¶77: calculation of resisting force for node; ¶¶78-79: calculate net force acting on node by obtaining vector sum of all virtual forces acting on node; ¶81: vector sum of forces by all virtual springs connected to node; ¶83 discloses calculation of total force on node accounting for user force and sum of other forces, such as virtual springs; ¶135: touch screen receiving at least one touch through body part of user, e.g. finger). The modification of Baba and Rosenberg with the force vector generation based on the input of Otsuka merely substitutes one type of virtual force input for manipulating a virtual object on a touch screen for another, yielding predictable results of using touch and release on a touch screen to generate a force vector for controlling a physical response of a virtual object. Baba and Rosenberg already disclose that it is known that a movement or deformation of a mesh is a result of a number of forces applied to a virtual point, including user input force. Otsuka merely teaches that it is a known technique to obtain a force based on a user input such as a swipe on a touch screen, generating the force upon the end of the touch gesture. The modification results in an improved user interface by allowing for coordinated physical effects based on a different common touch screen user input (i.e. swipe) for a more interactive effect and a more intuitive user interface response, allowing interactive touch screen input for modifying an otherwise force-based interactive virtual object.
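By way of illustration only, the force summation exemplified above by Korba (¶¶77-83), i.e. a net force on a mesh node obtained as the vector sum of the virtual-spring forces acting on the node plus a user input force, followed by an integration step, may be sketched as follows. The sketch is the examiner’s own and is not code from Korba; the spring constant, mass, and time step are hypothetical.

import numpy as np

def step_node(pos, vel, neighbors, rest_lens, user_force,
              k=40.0, mass=1.0, dt=1.0 / 60.0):
    """Advance one node of a spring mesh by one time step under the
    vector sum of its spring forces and a user input force."""
    net = np.array(user_force, dtype=float)
    for nbr, rest in zip(neighbors, rest_lens):
        d = np.asarray(nbr, dtype=float) - pos
        length = np.linalg.norm(d)
        net += k * (length - rest) * d / length     # Hooke's law per spring
    vel = vel + (net / mass) * dt                   # explicit Euler step
    pos = pos + vel * dt
    return pos, vel

# Usage: a node between two neighbors, nudged by a small user force.
pos, vel = np.array([1.2, 0.0]), np.zeros(2)
pos, vel = step_node(pos, vel, [(0.0, 0.0), (2.0, 0.0)], [1.0, 1.0], (0.0, 0.5))
print(pos, vel)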
Regarding claim 22, Baba further discloses:
Wherein a touch point corresponding to the touch sliding operation is used to determine the action point, and a sliding trajectory corresponding to the touch sliding operation is used to determine the deformation position (Baba, ¶43: input unit detects slide or swipe operation on touch panel by user; Fig. 7 and ¶48: elastic object 420 behaves like an elastic body based on user interaction with the user interface, such as user’s operation on touch panel, including contact start point; Fig. 12 and ¶59 discusses arbitrary vertex 720A moved by slide operation)
Rosenberg further discloses:
Wherein a touch point corresponding to the touch sliding operation is used to determine the action point, and a sliding trajectory corresponding to the touch sliding operation is used to determine the deformation position and the velocity of the action point (Rosenberg, ¶27: input on touch surface having one or more X,Y locations on touch sensor surface; ¶45: generating a force vector includes a magnitude within a sampling or scan period – i.e. a distance over a period of time is a velocity, also discussed in ¶46 as location points to determine magnitude and generate force from magnitude; ¶56: the computer system moves and/or deforms a virtual surface (or a virtual object) within the virtual model according to a virtual force vector and a physics model defining mechanical creep, elastic deformation, plastic deformation, and/or inertial dynamics, etc. of the virtual surface and/or the virtual object; Also ¶68 discloses determining force magnitude based on relative speed of input)
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface for allowing interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
Regarding claim 23, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above and incorporated herein. Further regarding claim 23, Baba modified by Rosenberg further discloses:
Wherein determining a deformation position and a velocity of an action point in the mesh model of the elastic object under action of the deformation triggering operation (Rosenberg, ¶¶19-20: input object, e.g. finger, moves from particular location to second location on touch surface, mapped to an area on a virtual surface of a virtual 3D object, and shift area on virtual surface of virtual 3D object inward at a rate or degree corresponding to the force magnitude of the input and in direction corresponding to orientation of the touch sensor surface in real space; ¶36: computer system can map the origin of the force vector to a vertex, triangle, or point in a 3D mesh representing the virtual environment, or the computer system can map the origin of the force vector to a particular pixel within a texture mapped to the 3D mesh representing the virtual environment; ¶45: generating a force vector includes a magnitude within a sampling or scan period – i.e. a distance over a period of time is a velocity, also discussed in ¶46 as location points to determine magnitude and generate force from magnitude; ¶56: the computer system moves and/or deforms a virtual surface (or a virtual object) within the virtual model according to a virtual force vector and a physics model defining mechanical creep, elastic deformation, plastic deformation, and/or inertial dynamics, etc. of the virtual surface and/or the virtual object; Also ¶68 discloses determining force magnitude based on relative speed of input) comprises:
Determining, based on the velocity and the deformation position of the action point (Rosenberg, ¶20: in response to application of a finger on a touch sensor surface, the particular location on the touch sensor surface is mapped to an area on a virtual surface of a virtual three-dimensional object, shifting the area of the virtual three-dimensional object inward at a rate or degree corresponding to the force magnitude of the input in a direction corresponding to the orientation of the touch sensor surface in real space)
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface for allowing interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
Accordingly, Baba modified by Rosenberg teaches the concept of a deformable mesh object on a display device, wherein a force is determined based on user input, and the mesh is deformed based on the resulting force from the input. The only element missing is the generation of a force vector based on the full input motion.
Otsuka teaches:
Determining, based on the velocity and position of the action point at a moment when the touch sliding operation is stopped, the velocity and the deformation position of the action point after the touch sliding operation is stopped (Otsuka, ¶26: User actions such as a direction of finger swipe against a touchscreen, an acceleration/speed of finger swipe against the touchscreen, a length of finger swipe against the touchscreen, and/or a downward force exerted on the touchscreen, may determine the trajectory and/or momentum of the projectile; ¶27: the trajectory of a projectile is determined by an input made on a touchscreen, e.g., swiping a finger from a first point to a second point on the touchscreen. A velocity vector having a direction component and a speed component can be obtained from the direction and speed of the swipe, which can then be used to affect the trajectory of the projectile; ¶52: The touchscreen receives an input comprising a finger swipe, from which a swipe direction and a swipe speed are obtained (step 164). The swiping motion is detected by a sensing unit 810 (see FIG. 8) and is translated to a velocity vector by a controller 802 (see FIG. 8), which is used to determine the trajectory of the projectile)
Baba, Rosenberg and Otsuka are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, by utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, and further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, by using the technique for obtaining a force input vector to apply to a virtual object based on a touch and release of a touch screen as provided by Otsuka, using known electronic interfacing and programming techniques. One of ordinary skill in the art would have understood that the different force vectors can be attributed to a single action point on the mesh, as this is merely a mathematical computation of forces as applied as a total to a point, whether the force is derived from the internal forces of the object itself or from a particular user input that generates a force vector of both direction and magnitude, as exemplified by Korba, which teaches that a mesh node can be used to modify a virtual object based on the sum of the applied forces to the node itself, including elastic forces applied to the node and user input forces in the form of a gesture (Korba, ¶77: calculation of resisting force for node; ¶¶78-79: calculate net force acting on node by obtaining vector sum of all virtual forces acting on node; ¶81: vector sum of forces by all virtual springs connected to node; ¶83 discloses calculation of total force on node accounting for user force and sum of other forces, such as virtual springs; ¶135: touch screen receiving at least one touch through body part of user, e.g. finger). The modification of Baba and Rosenberg with the force vector generation based on the input of Otsuka merely substitutes one type of virtual force input for manipulating a virtual object on a touch screen for another, yielding predictable results of using touch and release on a touch screen to generate a force vector for controlling a physical response of a virtual object. Baba and Rosenberg already disclose that it is known that a movement or deformation of a mesh is a result of a number of forces applied to a virtual point, including user input force. Otsuka merely teaches that it is a known technique to obtain a force based on a user input such as a swipe on a touch screen, generating the force upon the end of the touch gesture. The modification results in an improved user interface by allowing for coordinated physical effects based on a different common touch screen user input (i.e. swipe) for a more interactive effect and a more intuitive user interface response, allowing interactive touch screen input for modifying an otherwise force-based interactive virtual object.
Regarding claim 7, Baba further discloses:
Determine, based on the deformation position (Baba, ¶75: after Step S106, when the non-contact determination unit 860 determines that the user has lifted the finger off the touch panel, as described above with reference to FIG. 8, the restored-object forming unit 870 contracts the elastic object that has been elastically deformed stepwise toward the start point in accordance with a restoring force of the elastic object.)
Rosenberg further discloses:
Determine, based on the deformation position and the velocity of the action point at the moment when the touch sliding operation is stopped together with a predetermined gravity at the action point, the velocity and the position of the action point after the touch sliding operation is stopped (Rosenberg, ¶36: define a specific location of the origin of the force vector within the virtual environment based on the position of an input on the touch sensor surface, such as the centroid of the area of an input or the location of the peak force measured within the area of the input, in Block S142, e.g., the computer system can map the origin of the force vector to a vertex, triangle, or point in a 3D mesh representing the virtual environment; ¶50 discloses user providing input in a rapid forward position and release of one or more inputs, such that computer system can generate a motion vector for the virtual baseball corresponding to the real trajectory of the input device in real space according to the motion vector upon release of the inputs; ¶56: virtual object deformed according to a virtual force vector and physical model including elastic deformation and inertial dynamics)
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface for allowing interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
The combination of references including Otsuka teaches that the determining, based on the velocity and the deformation position of the action point at a moment when the touch sliding operation is stopped, the velocity and the deformation position of the action point after the touch sliding operation is stopped, comprises the recited limitation, including the determination at the moment the touch sliding operation is stopped, as taught by Otsuka above for claim 23.
Baba, Rosenberg and Otsuka are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user specified input as provided by Baba, by utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, by using the technique for obtaining a force input vector to apply to a virtual object based on a touch and release of a touch screen as provided by Otsuka, using known electronic interfacing and programming techniques. One of ordinary skill in the art would have understood that the different force vectors can be attributed to a single action point on the mesh, as this is merely a mathematical computation of forces as applied as a total to a point, whether the force is derived from the internal forces of the object itself or from a particular user input that generates a force vector of both direction and magnitude, as exemplified by Korba, which teaches that a mesh node can be used to modify a virtual object based on the sum of the applied forces to the node itself, including elastic forces applied to the node and user input forces in the form of a gesture (Korba, ¶77: calculation of resisting force for node; ¶¶78-79: calculate net force acting on node by obtaining vector sum of all virtual forces acting on node; ¶81: vector sum of forces by all virtual springs connected to node; ¶83 discloses calculation of total force on node accounting for user force and sum of other forces, such as virtual springs; ¶135: touch screen receiving at least one touch through body part of user, e.g. finger). The modification of Baba and Rosenberg with the force vector generation based on the input of Otsuka merely substitutes one type of virtual force input for manipulating a virtual object on a touch screen for another, yielding predictable results of using touch and release on a touch screen to generate a force vector for controlling a physical response of a virtual object. Baba and Rosenberg already disclose that it is known that a movement or deformation of a mesh is a result of a number of forces applied to a virtual point, including user input force. Otsuka merely teaches that it is a known technique to obtain a force based on a user input such as a swipe on a touch screen, generating the force upon the end of the touch gesture. The modification results in an improved user interface by allowing for coordinated physical effects based on a different common touch screen user input (i.e. swipe) for a more interactive effect and a more intuitive user interface response, allowing interactive touch screen input for modifying an otherwise force-based interactive virtual object.
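By way of illustration only, the post-release behavior at issue in claims 7 and 23, i.e. continuing the motion of the action point from its position and velocity at the moment the touch sliding operation is stopped, under the elastic constraint and a predetermined gravity, may be sketched as follows. The sketch is the examiner’s own; the constants are hypothetical, and the damping term is an added assumption for numerical stability rather than a feature of the cited references.

import numpy as np

def settle(release_pos, release_vel, origin, k=25.0, damping=3.0,
           gravity=(0.0, -9.8), dt=1.0 / 60.0, steps=240):
    """Trajectory of the action point after the touch slide is released."""
    pos = np.array(release_pos, dtype=float)
    vel = np.array(release_vel, dtype=float)
    g = np.asarray(gravity, dtype=float)
    path = [pos.copy()]
    for _ in range(steps):
        restoring = k * (np.asarray(origin, dtype=float) - pos)   # elastic constraint
        vel += (restoring + g - damping * vel) * dt
        pos += vel * dt
        path.append(pos.copy())
    return path

# Usage: released at (3, 1) while still moving upward; the point settles
# near the gravity-offset equilibrium about the origin.
path = settle(release_pos=(3.0, 1.0), release_vel=(0.0, 4.0), origin=(0.0, 0.0))
print(path[-1])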
Regarding claim 24, Baba further discloses:
Wherein the determining, based on the velocity and the deformation position of the action point at a moment when the touch sliding operation is stopped, the velocity and the deformation position of the action point after the touch sliding operation is stopped (rejected as in parent claim 23 set forth above) comprises:
determining, based on the deformation position of the action point at a moment when the touch sliding operation is stopped on the display device together with the elastic constraint between the action point and other mesh points, force information of the action point at the deformation position, and (Baba, ¶75: after Step S106, when the non-contact determination unit 860 determines that the user has lifted the finger off the touch panel, as described above with reference to FIG. 8, the restored-object forming unit 870 contracts the elastic object that has been elastically deformed stepwise toward the start point in accordance with a restoring force of the elastic object.)
Determining, based on the deformation position, (Baba, ¶74 discloses deformation when user continuously causes the finger to make a slide action up to end point 2 – see fig. 22; ¶85: after Step S106, when the non-contact determination unit 860 determines that the user has lifted the finger off the touch panel, as described above with reference to FIG. 8, the restored-object forming unit 870 contracts the elastic object that has been elastically deformed stepwise toward the start point in accordance with a restoring force of the elastic object, to thereby restore the initial shape illustrated in FIG. 6. – i.e. the velocity and deformation of the action point is determined after the touch sliding has stopped as the object is restored to the initial shape)
Otsuka teaches:
Determining, based on the deformation position of the action point at the moment when the touch sliding operation is stopped together with the elastic constraint between the action point and other mesh points, force information of the action point at the deformation position and determining, based on the position, the velocity and the force information of the action point at the moment when the touch sliding operation is stopped on the display device, the velocity and the position of the action point after the touch sliding operation is stopped (Otsuka, ¶26: User actions such as a direction of finger swipe against a touchscreen, an acceleration/speed of finger swipe against the touchscreen, a length of finger swipe against the touchscreen, and/or a downward force exerted on the touchscreen, may determine the trajectory and/or momentum of the projectile; ¶27: the trajectory of a projectile is determined by an input made on a touchscreen, e.g., swiping a finger from a first point to a second point on the touchscreen. A velocity vector having a direction component and a speed component can be obtained from the direction and speed of the swipe, which can then be used to affect the trajectory of the projectile; ¶52: The touchscreen receives an input comprising a finger swipe, from which a swipe direction and a swipe speed are obtained (step 164). The swiping motion is detected by a sensing unit 810 (see FIG. 8) and is translated to a velocity vector by a controller 802 (see FIG. 8), which is used to determine the trajectory of the projectile)
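By way of illustration only, the swipe-to-velocity-vector conversion for which Otsuka is cited may be sketched as follows (hypothetical code, not from the reference):

# Illustrative sketch only: deriving a velocity vector from a finger swipe,
# per the principle for which Otsuka is cited: the direction and speed of
# the swipe yield a direction component and a speed component.
import math

def swipe_to_velocity(start, end, duration_s):
    """Return (unit_direction, speed) for a swipe from start to end."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or duration_s <= 0.0:
        return (0.0, 0.0), 0.0
    return (dx / dist, dy / dist), dist / duration_s

direction, speed = swipe_to_velocity((10, 300), (220, 120), duration_s=0.15)
print(direction, speed)  # e.g., used to set a projectile's trajectory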
Baba, Rosenberg and Otsuka are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, by further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, and by using the technique for obtaining a force input vector to apply to a virtual object based on a touch and release of a touch screen as provided by Otsuka, using known electronic interfacing and programming techniques. One of ordinary skill in the art would have understood that the different force vectors can be attributed to a single action point on the mesh, as this is merely a mathematical computation of the forces applied in total to a point, whether a force is derived from the internal forces of the object itself or from a particular user input that generates a force vector having both direction and magnitude, as exemplified by Korba, which teaches that a mesh node can be used to modify a virtual object based on the sum of the forces applied to the node itself, including elastic forces applied to the node and user input forces in the form of a gesture (Korba, ¶77: calculation of resisting force for node; ¶¶78-79: calculate net force acting on node by obtaining vector sum of all virtual forces acting on node; ¶81: vector sum of forces by all virtual springs connected to node; ¶83: calculation of total force on node accounting for user force and sum of other forces, such as virtual springs; ¶135: touch screen receiving at least one touch through body part of user, e.g. finger). The modification of Baba and Rosenberg with the force vector generation based on the input of Otsuka merely substitutes one type of virtual force input for manipulating a virtual object on a touch screen for another, yielding the predictable result of using a touch and release on a touch screen to generate a force vector for controlling a physical response of a virtual object. Baba and Rosenberg already disclose that movement or deformation of a mesh is the result of a number of forces applied to a virtual point, including a user input force. Otsuka merely teaches that it is a known technique to obtain a force based on a user input, such as a swipe on a touch screen, generating the force upon the end of the touch gesture. The modification results in an improved user interface by allowing coordinated physical effects based on a different common touch screen user input (i.e., a swipe), producing a more interactive effect and a more intuitive user interface response while permitting interactive touch screen input to modify an otherwise force-based interactive virtual object.
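By way of illustration only, the decimation technique for which Schroeder is cited rests on a distance-to-average-plane test, which may be sketched as follows (hypothetical code written for this discussion, not code from the reference):

# Illustrative sketch only: a vertex is a candidate for removal during
# Schroeder-style decimation when it lies within a tolerance of the average
# plane of its surrounding (one-ring) vertices. Constants are hypothetical.
import numpy as np

def distance_to_average_plane(vertex, ring):
    """Distance from a vertex to the best-fit plane of its neighbors."""
    ring = np.asarray(ring, dtype=float)
    centroid = ring.mean(axis=0)
    # The plane normal is the singular vector of least variance.
    _, _, vt = np.linalg.svd(ring - centroid)
    normal = vt[-1]
    return abs(np.dot(np.asarray(vertex, dtype=float) - centroid, normal))

ring = [(0, 0, 0.0), (1, 0, 0.01), (1, 1, -0.02), (0, 1, 0.01)]
d = distance_to_average_plane((0.5, 0.5, 0.005), ring)
print(d < 0.05)  # True: within tolerance, so a candidate for removal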
Regarding claim 9, Baba modified by Rosenberg further discloses:
determining, based on the deformation position, the velocity and the force information of the action point at a previous moment, the deformation position, the velocity and the force information of the action point at a moment, after the touch sliding operation is stopped (Rosenberg, ¶36: define a specific location of the origin of the force vector within the virtual environment based on the position of an input on the touch sensor surface, such as the centroid of the area of an input or the location of the peak force measured within the area of the input, in Block S142, e.g., the computer system can map the origin of the force vector to a vertex, triangle, or point in a 3D mesh representing the virtual environment; ¶50 discloses user providing input in a rapid forward position and release of one or more inputs, such that computer system can generate a motion vector for the virtual baseball corresponding to the real trajectory of the input device in real space according to the motion vector upon release of the inputs; ¶56: virtual object deformed according to a virtual force vector and physical model including elastic deformation and inertial dynamics)
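By way of illustration only, the moment-to-moment update described above (the state at a previous moment determining the state at the next moment) may be sketched as a simple semi-implicit Euler step; the constants and the restoring force are hypothetical, not taken from the references:

# Illustrative sketch only: the action point's position, velocity, and force
# at the previous moment determine its state at the next moment.
def step(pos, vel, force, mass=1.0, dt=1.0 / 60.0):
    """Advance the action point one frame given the prior moment's force."""
    vel = tuple(v + (f / mass) * dt for v, f in zip(vel, force))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel

pos, vel = (1.0, 0.5), (0.0, 0.0)
for _ in range(3):
    force = tuple(-5.0 * p for p in pos)  # hypothetical elastic restoring force
    pos, vel = step(pos, vel, force)
    print(pos, vel)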
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface that allows interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
Accordingly, Baba modified by Rosenberg teaches the concept of a deformable mesh object on a display device, wherein a force is determined based on user input, and the mesh is deformed based on the resulting force from the input. The only element missing is the generation of a force vector based on the touch sliding input.
Otsuka teaches:
Determining, based on the deformation position, the velocity and the force information of the action point at the moment when the touch sliding operation is stopped on the display device, the velocity and the position of the action point after the touch sliding operation is stopped (Otsuka, ¶26: User actions such as a direction of finger swipe against a touchscreen, an acceleration/speed of finger swipe against the touchscreen, a length of finger swipe against the touchscreen, and/or a downward force exerted on the touchscreen, may determine the trajectory and/or momentum of the projectile; ¶27: the trajectory of a projectile is determined by an input made on a touchscreen, e.g., swiping a finger from a first point to a second point on the touchscreen. A velocity vector having a direction component and a speed component can be obtained from the direction and speed of the swipe, which can then be used to affect the trajectory of the projectile; ¶52: The touchscreen receives an input comprising a finger swipe, from which a swipe direction and a swipe speed are obtained (step 164). The swiping motion is detected by a sensing unit 810 (see FIG. 8) and is translated to a velocity vector by a controller 802 (see FIG. 8), which is used to determine the trajectory of the projectile)
Baba, Rosenberg and Otsuka are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, by further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, and by using the technique for obtaining a force input vector to apply to a virtual object based on a touch and release of a touch screen as provided by Otsuka, using known electronic interfacing and programming techniques. One of ordinary skill in the art would have understood that the different force vectors can be attributed to a single action point on the mesh, as this is merely a mathematical computation of the forces applied in total to a point, whether a force is derived from the internal forces of the object itself or from a particular user input that generates a force vector having both direction and magnitude, as exemplified by Korba, which teaches that a mesh node can be used to modify a virtual object based on the sum of the forces applied to the node itself, including elastic forces applied to the node and user input forces in the form of a gesture (Korba, ¶77: calculation of resisting force for node; ¶¶78-79: calculate net force acting on node by obtaining vector sum of all virtual forces acting on node; ¶81: vector sum of forces by all virtual springs connected to node; ¶83: calculation of total force on node accounting for user force and sum of other forces, such as virtual springs; ¶135: touch screen receiving at least one touch through body part of user, e.g. finger). The modification of Baba and Rosenberg with the force vector generation based on the input of Otsuka merely substitutes one type of virtual force input for manipulating a virtual object on a touch screen for another, yielding the predictable result of using a touch and release on a touch screen to generate a force vector for controlling a physical response of a virtual object. Baba and Rosenberg already disclose that movement or deformation of a mesh is the result of a number of forces applied to a virtual point, including a user input force. Otsuka merely teaches that it is a known technique to obtain a force based on a user input, such as a swipe on a touch screen, generating the force upon the end of the touch gesture. The modification results in an improved user interface by allowing coordinated physical effects based on a different common touch screen user input (i.e., a swipe), producing a more interactive effect and a more intuitive user interface response while permitting interactive touch screen input to modify an otherwise force-based interactive virtual object.
Regarding claim 25, the limitations included from claim 10 are rejected based on the same rationale as claim 10 set forth above and incorporated herein. Further regarding claim 25, Baba modified by Rosenberg further discloses:
Determining, based on the velocity and the deformation position of the action point (Rosenberg, ¶20: in response to application of a finger on a touch sensor surface, the particular location on the touch sensor surface is mapped to an area on a virtual surface of a virtual three-dimensional object, shifting the area of the virtual three-dimensional object inward at a rate or degree corresponding to the force magnitude of the input in a direction corresponding to the orientation of the touch sensor surface in real space)
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface that allows interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
Accordingly, Baba modified by Rosenberg teaches the concept of a deformable mesh object on a display device, wherein a force is determined based on user input, and the mesh is deformed based on the resulting force from the input. The only element missing is the generation of a force vector based on the full input motion.
Otsuka teaches:
Determining, based on the velocity and position of the action point at a moment when the touch sliding operation is stopped, the velocity and the deformation position of the action point after the touch sliding operation is stopped (Otsuka, ¶26: User actions such as a direction of finger swipe against a touchscreen, an acceleration/speed of finger swipe against the touchscreen, a length of finger swipe against the touchscreen, and/or a downward force exerted on the touchscreen, may determine the trajectory and/or momentum of the projectile; ¶27: the trajectory of a projectile is determined by an input made on a touchscreen, e.g., swiping a finger from a first point to a second point on the touchscreen. A velocity vector having a direction component and a speed component can be obtained from the direction and speed of the swipe, which can then be used to affect the trajectory of the projectile; ¶52: The touchscreen receives an input comprising a finger swipe, from which a swipe direction and a swipe speed are obtained (step 164). The swiping motion is detected by a sensing unit 810 (see FIG. 8) and is translated to a velocity vector by a controller 802 (see FIG. 8), which is used to determine the trajectory of the projectile)
Baba, Rosenberg and Otsuka are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, by further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, and by using the technique for obtaining a force input vector to apply to a virtual object based on a touch and release of a touch screen as provided by Otsuka, using known electronic interfacing and programming techniques. One of ordinary skill in the art would have understood that the different force vectors can be attributed to a single action point on the mesh, as this is merely a mathematical computation of the forces applied in total to a point, whether a force is derived from the internal forces of the object itself or from a particular user input that generates a force vector having both direction and magnitude, as exemplified by Korba, which teaches that a mesh node can be used to modify a virtual object based on the sum of the forces applied to the node itself, including elastic forces applied to the node and user input forces in the form of a gesture (Korba, ¶77: calculation of resisting force for node; ¶¶78-79: calculate net force acting on node by obtaining vector sum of all virtual forces acting on node; ¶81: vector sum of forces by all virtual springs connected to node; ¶83: calculation of total force on node accounting for user force and sum of other forces, such as virtual springs; ¶135: touch screen receiving at least one touch through body part of user, e.g. finger). The modification of Baba and Rosenberg with the force vector generation based on the input of Otsuka merely substitutes one type of virtual force input for manipulating a virtual object on a touch screen for another, yielding the predictable result of using a touch and release on a touch screen to generate a force vector for controlling a physical response of a virtual object. Baba and Rosenberg already disclose that movement or deformation of a mesh is the result of a number of forces applied to a virtual point, including a user input force. Otsuka merely teaches that it is a known technique to obtain a force based on a user input, such as a swipe on a touch screen, generating the force upon the end of the touch gesture. The modification results in an improved user interface by allowing coordinated physical effects based on a different common touch screen user input (i.e., a swipe), producing a more interactive effect and a more intuitive user interface response while permitting interactive touch screen input to modify an otherwise force-based interactive virtual object.
Regarding claim 16, Baba further discloses:
Determine, based on the deformation position (Baba, ¶75: after Step S106, when the non-contact determination unit 860 determines that the user has lifted the finger off the touch panel, as described above with reference to FIG. 8, the restored-object forming unit 870 contracts the elastic object that has been elastically deformed stepwise toward the start point in accordance with a restoring force of the elastic object.)
Rosenberg further discloses:
Determine, based on the deformation position and the velocity of the action point at the moment when the touch sliding operation is stopped together with a predetermined gravity at the action point, the velocity and the position of the action point after the touch sliding operation is stopped (Rosenberg, ¶36: define a specific location of the origin of the force vector within the virtual environment based on the position of an input on the touch sensor surface, such as the centroid of the area of an input or the location of the peak force measured within the area of the input, in Block S142, e.g., the computer system can map the origin of the force vector to a vertex, triangle, or point in a 3D mesh representing the virtual environment; ¶50 discloses user providing input in a rapid forward position and release of one or more inputs, such that computer system can generate a motion vector for the virtual baseball corresponding to the real trajectory of the input device in real space according to the motion vector upon release of the inputs; ¶56: virtual object deformed according to a virtual force vector and physical model including elastic deformation and inertial dynamics)
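By way of illustration only, updating the action point after the slide stops, given its state at release together with a predetermined gravity at the action point, may be sketched as follows (hypothetical constants, not taken from the references):

# Illustrative sketch only: one frame of motion after the touch sliding
# operation stops, from the release-time state and a predetermined gravity.
GRAVITY = (0.0, -9.8)  # hypothetical predetermined gravity at the action point

def post_release_step(pos, vel, dt=1.0 / 60.0):
    """Advance the action point one frame under gravity after release."""
    vel = tuple(v + g * dt for v, g in zip(vel, GRAVITY))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel

pos, vel = (0.3, 0.8), (1.2, 0.4)  # state when the slide operation stops
for _ in range(3):
    pos, vel = post_release_step(pos, vel)
    print(pos, vel)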
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface that allows interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
The combination of references, including Otsuka, discloses that the determining, based on the velocity and the deformation position of the action point at a moment when the touch sliding operation is stopped, of the velocity and the deformation position of the action point after the touch sliding operation is stopped comprises the recited limitation, including at the moment the touch sliding operation is stopped, as taught by Otsuka above with respect to claim 25.
Baba, Rosenberg and Otsuka are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, by further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, and by using the technique for obtaining a force input vector to apply to a virtual object based on a touch and release of a touch screen as provided by Otsuka, using known electronic interfacing and programming techniques. One of ordinary skill in the art would have understood that the different force vectors can be attributed to a single action point on the mesh, as this is merely a mathematical computation of the forces applied in total to a point, whether a force is derived from the internal forces of the object itself or from a particular user input that generates a force vector having both direction and magnitude, as exemplified by Korba, which teaches that a mesh node can be used to modify a virtual object based on the sum of the forces applied to the node itself, including elastic forces applied to the node and user input forces in the form of a gesture (Korba, ¶77: calculation of resisting force for node; ¶¶78-79: calculate net force acting on node by obtaining vector sum of all virtual forces acting on node; ¶81: vector sum of forces by all virtual springs connected to node; ¶83: calculation of total force on node accounting for user force and sum of other forces, such as virtual springs; ¶135: touch screen receiving at least one touch through body part of user, e.g. finger). The modification of Baba and Rosenberg with the force vector generation based on the input of Otsuka merely substitutes one type of virtual force input for manipulating a virtual object on a touch screen for another, yielding the predictable result of using a touch and release on a touch screen to generate a force vector for controlling a physical response of a virtual object. Baba and Rosenberg already disclose that movement or deformation of a mesh is the result of a number of forces applied to a virtual point, including a user input force. Otsuka merely teaches that it is a known technique to obtain a force based on a user input, such as a swipe on a touch screen, generating the force upon the end of the touch gesture. The modification results in an improved user interface by allowing coordinated physical effects based on a different common touch screen user input (i.e., a swipe), producing a more interactive effect and a more intuitive user interface response while permitting interactive touch screen input to modify an otherwise force-based interactive virtual object.
Accordingly, Baba modified by Rosenberg teaches the concept of a deformable mesh object on a display device, wherein a force is determined based on user input, and the mesh is deformed based on the resulting force from the input. The only element missing is the generation of a force vector based on the touch sliding input.
Otsuka teaches:
Determining, based on the deformation position, the velocity and the force information of the action point at the moment when the touch sliding operation is stopped on the display device, the velocity and the position of the action point after the touch sliding operation is stopped (Otsuka, ¶26: User actions such as a direction of finger swipe against a touchscreen, an acceleration/speed of finger swipe against the touchscreen, a length of finger swipe against the touchscreen, and/or a downward force exerted on the touchscreen, may determine the trajectory and/or momentum of the projectile; ¶27: the trajectory of a projectile is determined by an input made on a touchscreen, e.g., swiping a finger from a first point to a second point on the touchscreen. A velocity vector having a direction component and a speed component can be obtained from the direction and speed of the swipe, which can then be used to affect the trajectory of the projectile; ¶52: The touchscreen receives an input comprising a finger swipe, from which a swipe direction and a swipe speed are obtained (step 164). The swiping motion is detected by a sensing unit 810 (see FIG. 8) and is translated to a velocity vector by a controller 802 (see FIG. 8), which is used to determine the trajectory of the projectile)
Baba, Rosenberg and Otsuka are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, by further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, and by using the technique for obtaining a force input vector to apply to a virtual object based on a touch and release of a touch screen as provided by Otsuka, using known electronic interfacing and programming techniques. One of ordinary skill in the art would have understood that the different force vectors can be attributed to a single action point on the mesh, as this is merely a mathematical computation of the forces applied in total to a point, whether a force is derived from the internal forces of the object itself or from a particular user input that generates a force vector having both direction and magnitude, as exemplified by Korba, which teaches that a mesh node can be used to modify a virtual object based on the sum of the forces applied to the node itself, including elastic forces applied to the node and user input forces in the form of a gesture (Korba, ¶77: calculation of resisting force for node; ¶¶78-79: calculate net force acting on node by obtaining vector sum of all virtual forces acting on node; ¶81: vector sum of forces by all virtual springs connected to node; ¶83: calculation of total force on node accounting for user force and sum of other forces, such as virtual springs; ¶135: touch screen receiving at least one touch through body part of user, e.g. finger). The modification of Baba and Rosenberg with the force vector generation based on the input of Otsuka merely substitutes one type of virtual force input for manipulating a virtual object on a touch screen for another, yielding the predictable result of using a touch and release on a touch screen to generate a force vector for controlling a physical response of a virtual object. Baba and Rosenberg already disclose that movement or deformation of a mesh is the result of a number of forces applied to a virtual point, including a user input force. Otsuka merely teaches that it is a known technique to obtain a force based on a user input, such as a swipe on a touch screen, generating the force upon the end of the touch gesture. The modification results in an improved user interface by allowing coordinated physical effects based on a different common touch screen user input (i.e., a swipe), producing a more interactive effect and a more intuitive user interface response while permitting interactive touch screen input to modify an otherwise force-based interactive virtual object.
Regarding claim 26, Baba further discloses:
Wherein the determining, based on the velocity and the deformation position of the action point at a moment when the touch sliding operation is stopped, the velocity and the deformation position of the action point after the touch sliding operation is stopped (rejected as in parent claim 23 set forth above) comprises:
determining, based on the deformation position of the action point at a moment when the touch sliding operation is stopped on the display device together with the elastic constraint between the action point and other mesh points, force information of the action point at the deformation position, and (Baba, ¶75: after Step S106, when the non-contact determination unit 860 determines that the user has lifted the finger off the touch panel, as described above with reference to FIG. 8, the restored-object forming unit 870 contracts the elastic object that has been elastically deformed stepwise toward the start point in accordance with a restoring force of the elastic object.)
Determining, based on the deformation position, (Baba, ¶74 discloses deformation when user continuously causes the finger to make a slide action up to end point 2 – see fig. 22; ¶85: after Step S106, when the non-contact determination unit 860 determines that the user has lifted the finger off the touch panel, as described above with reference to FIG. 8, the restored-object forming unit 870 contracts the elastic object that has been elastically deformed stepwise toward the start point in accordance with a restoring force of the elastic object, to thereby restore the initial shape illustrated in FIG. 6. – i.e. the velocity and deformation of the action point is determined after the touch sliding has stopped as the object is restored to the initial shape)
Otsuka teaches:
determining, based on the position, the velocity and the force information of the action point at the moment when the touch sliding operation is stopped on the display device, the velocity and the position of the action point after the touch sliding operation is stopped (Otsuka, ¶26: User actions such as a direction of finger swipe against a touchscreen, an acceleration/speed of finger swipe against the touchscreen, a length of finger swipe against the touchscreen, and/or a downward force exerted on the touchscreen, may determine the trajectory and/or momentum of the projectile; ¶27: the trajectory of a projectile is determined by an input made on a touchscreen, e.g., swiping a finger from a first point to a second point on the touchscreen. A velocity vector having a direction component and a speed component can be obtained from the direction and speed of the swipe, which can then be used to affect the trajectory of the projectile; ¶52: The touchscreen receives an input comprising a finger swipe, from which a swipe direction and a swipe speed are obtained (step 164). The swiping motion is detected by a sensing unit 810 (see FIG. 8) and is translated to a velocity vector by a controller 802 (see FIG. 8), which is used to determine the trajectory of the projectile)
Baba, Rosenberg and Otsuka are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, by further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, and by using the technique for obtaining a force input vector to apply to a virtual object based on a touch and release of a touch screen as provided by Otsuka, using known electronic interfacing and programming techniques. One of ordinary skill in the art would have understood that the different force vectors can be attributed to a single action point on the mesh, as this is merely a mathematical computation of the forces applied in total to a point, whether a force is derived from the internal forces of the object itself or from a particular user input that generates a force vector having both direction and magnitude, as exemplified by Korba, which teaches that a mesh node can be used to modify a virtual object based on the sum of the forces applied to the node itself, including elastic forces applied to the node and user input forces in the form of a gesture (Korba, ¶77: calculation of resisting force for node; ¶¶78-79: calculate net force acting on node by obtaining vector sum of all virtual forces acting on node; ¶81: vector sum of forces by all virtual springs connected to node; ¶83: calculation of total force on node accounting for user force and sum of other forces, such as virtual springs; ¶135: touch screen receiving at least one touch through body part of user, e.g. finger). The modification of Baba and Rosenberg with the force vector generation based on the input of Otsuka merely substitutes one type of virtual force input for manipulating a virtual object on a touch screen for another, yielding the predictable result of using a touch and release on a touch screen to generate a force vector for controlling a physical response of a virtual object. Baba and Rosenberg already disclose that movement or deformation of a mesh is the result of a number of forces applied to a virtual point, including a user input force. Otsuka merely teaches that it is a known technique to obtain a force based on a user input, such as a swipe on a touch screen, generating the force upon the end of the touch gesture. The modification results in an improved user interface by allowing coordinated physical effects based on a different common touch screen user input (i.e., a swipe), producing a more interactive effect and a more intuitive user interface response while permitting interactive touch screen input to modify an otherwise force-based interactive virtual object.
Regarding claim 18, Baba modified by Rosenberg further discloses:
determining, based on the deformation position, the velocity and the force information of the action point at a previous moment, the deformation position, the velocity and the force information of the action point at a moment, after the touch sliding operation is stopped (Rosenberg, ¶36: define a specific location of the origin of the force vector within the virtual environment based on the position of an input on the touch sensor surface, such as the centroid of the area of an input or the location of the peak force measured within the area of the input, in Block S142, e.g., the computer system can map the origin of the force vector to a vertex, triangle, or point in a 3D mesh representing the virtual environment; ¶50 discloses user providing input in a rapid forward position and release of one or more inputs, such that computer system can generate a motion vector for the virtual baseball corresponding to the real trajectory of the input device in real space according to the motion vector upon release of the inputs; ¶56: virtual object deformed according to a virtual force vector and physical model including elastic deformation and inertial dynamics)
Both Baba and Rosenberg are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by further incorporating the technique of utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, using known electronic interfacing and programming techniques. The modification results in an improved user interface that allows interactive deformation or stretching of an object based on real-time user input by incorporating speed control for more realistic feedback and user interaction, while also allowing for a more entertaining and dynamic visual result.
Accordingly, Baba modified by Rosenberg teaches the concept of a deformable mesh object on a display device, wherein a force is determined based on user input, and the mesh is deformed based on the resulting force from the input. The only element missing is the generation of a force vector based on the touch sliding input.
Otsuka teaches:
Determining, based on the deformation position, the velocity and the force information of the action point at the moment when the touch sliding operation is stopped on the display device, the velocity and the position of the action point after the touch sliding operation is stopped (Otsuka, ¶26: User actions such as a direction of finger swipe against a touchscreen, an acceleration/speed of finger swipe against the touchscreen, a length of finger swipe against the touchscreen, and/or a downward force exerted on the touchscreen, may determine the trajectory and/or momentum of the projectile; ¶27: the trajectory of a projectile is determined by an input made on a touchscreen, e.g., swiping a finger from a first point to a second point on the touchscreen. A velocity vector having a direction component and a speed component can be obtained from the direction and speed of the swipe, which can then be used to affect the trajectory of the projectile; ¶52: The touchscreen receives an input comprising a finger swipe, from which a swipe direction and a swipe speed are obtained (step 164). The swiping motion is detected by a sensing unit 810 (see FIG. 8) and is translated to a velocity vector by a controller 802 (see FIG. 8), which is used to determine the trajectory of the projectile)
Baba, Rosenberg and Otsuka are directed to techniques and devices for interactive user interfaces for controlling the deformation of virtual objects. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for presenting and controlling the deformation of a mesh object based on user-specified input as provided by Baba, by utilizing a velocity determination of input for additional simulated deformation of an object as provided by Rosenberg, by further performing a mesh simplification on the data for constructing a mesh for use in the system as provided by Schroeder, and by using the technique for obtaining a force input vector to apply to a virtual object based on a touch and release of a touch screen as provided by Otsuka, using known electronic interfacing and programming techniques. One of ordinary skill in the art would have understood that the different force vectors can be attributed to a single action point on the mesh, as this is merely a mathematical computation of the forces applied in total to a point, whether a force is derived from the internal forces of the object itself or from a particular user input that generates a force vector having both direction and magnitude, as exemplified by Korba, which teaches that a mesh node can be used to modify a virtual object based on the sum of the forces applied to the node itself, including elastic forces applied to the node and user input forces in the form of a gesture (Korba, ¶77: calculation of resisting force for node; ¶¶78-79: calculate net force acting on node by obtaining vector sum of all virtual forces acting on node; ¶81: vector sum of forces by all virtual springs connected to node; ¶83: calculation of total force on node accounting for user force and sum of other forces, such as virtual springs; ¶135: touch screen receiving at least one touch through body part of user, e.g. finger). The modification of Baba and Rosenberg with the force vector generation based on the input of Otsuka merely substitutes one type of virtual force input for manipulating a virtual object on a touch screen for another, yielding the predictable result of using a touch and release on a touch screen to generate a force vector for controlling a physical response of a virtual object. Baba and Rosenberg already disclose that movement or deformation of a mesh is the result of a number of forces applied to a virtual point, including a user input force. Otsuka merely teaches that it is a known technique to obtain a force based on a user input, such as a swipe on a touch screen, generating the force upon the end of the touch gesture. The modification results in an improved user interface by allowing coordinated physical effects based on a different common touch screen user input (i.e., a swipe), producing a more interactive effect and a more intuitive user interface response while permitting interactive touch screen input to modify an otherwise force-based interactive virtual object.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM A BEUTEL whose telephone number is (571)272-3132. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DANIEL HAJNIK can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM A BEUTEL/Primary Examiner, Art Unit 2616