DETAILED ACTION
Claims 1-20, filed May 2, 2025, are pending in the current action.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 10-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the broadest reasonable interpretation of machine-readable media can encompass non-statutory transitory forms of signal transmission, such as a propagating electrical or electromagnetic signal per se. See In re Nuijten, 500 F.3d 1346, 84 USPQ2d 1495 (Fed. Cir. 2007). When the BRI encompasses transitory forms of signal transmission, a rejection under 35 U.S.C. 101 as failing to claim statutory subject matter would be appropriate. Thus, a claim to a computer-readable medium that can be a compact disc or a carrier wave covers a non-statutory embodiment and therefore should be rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See, e.g., Mentor Graphics v. EVE-USA, Inc., 851 F.3d at 1294-95, 112 USPQ2d at 1134 (claims to a "machine-readable medium" were non-statutory because their scope encompassed both statutory random-access memory and non-statutory carrier waves).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, 10-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (US 2021/0089127) in view of Martin et al. (US 2021/0208698).
Consider claim 1, where Wu teaches a method for providing direct touch interaction with virtual objects while a user holds a controller device of an artificial reality system (see Wu Figs. 1C and 1D and ¶43, where a user holds a controller handle 50 with worn sensors 120 around the knuckles of the user's fingers, where the virtual fingers interact with virtual objects, and where an actuator (not shown) may be placed in each of the second wearing portions 120 to provide individual tactile feedback to different fingers of the user (i.e., only a finger interacting with a virtual object will feel the corresponding tactile feedback)), the method comprising: providing, by the artificial reality system and while a user's hand holds the controller device, a representation of the user's hand in an artificial reality environment; tracking that a portion of the user's hand is not in contact with the controller device (see Wu Figs. 2 and 3 and ¶48-51, where the sensors 120 are worn around the K3 knuckles, the bending degrees of the first knuckle K1 and the second knuckle K2 are estimated from the bending degree of the third knuckle K3 of each finger, and the gesture of each finger is thereby obtained); and controlling, in the artificial reality environment while the user's hand continues to hold the controller device and based on the tracked portion of the user's hand not being in contact with the controller device, a representation of the portion of the user's hand that is not in contact with the controller device performing a direct touch interaction with a virtual object (see Wu Figs. 1C and 1D and ¶43, where only a finger (specifically, the K1 and K2 portions of the finger) interacting with a virtual object will feel the corresponding tactile feedback).
Wu teaches controlling; however, Wu does not explicitly teach illustrating. However, in an analogous field of endeavor, Martin teaches illustrating. (See Martin ¶43-46, where the display device 110 enables presentation of information and/or virtual objects together with a viewer's view of the real world. In a virtual reality environment, the display device 110 presents a virtual scene that is completely rendered without combining aspects of the real-world environment. Beneficially, the pointing controller 120 enables a user to interact with the digital content in the virtual environment in a natural way. For example, a user may perform actions with objects in the three-dimensional space of the virtual environment such as pointing at virtual objects to select them, performing a pinching gesture to grab virtual objects, and moving virtual objects around in the virtual environment by motion of the hand.) Thus, it would have been obvious to one of ordinary skill in the art that the virtual hand of Wu (see Wu ¶18) used for controlling objects could be rendered into a virtual scene as taught by Martin. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of recognizing that the virtual system taught in Wu is capable of rendering and illustrating a virtual scene, as is standard in the art.
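For illustration of the estimation technique mapped above, the following is a minimal sketch of the approach described in Wu ¶48-51, in which only the third knuckle (K3) of each finger carries a sensor and the K1 and K2 bends are estimated from it. The linear coupling ratios below are illustrative assumptions, not values disclosed in Wu.

```python
from dataclasses import dataclass

@dataclass
class FingerPose:
    k3: float  # measured bend of the third knuckle (degrees)
    k2: float  # estimated bend of the second knuckle (degrees)
    k1: float  # estimated bend of the first knuckle (degrees)

# Assumed coupling ratios: adjacent finger joints tend to flex together,
# so K2 and K1 are modeled here as fixed fractions of the measured K3 bend.
K2_RATIO = 0.85
K1_RATIO = 0.55

def estimate_finger_pose(k3_bend_deg: float) -> FingerPose:
    """Estimate a full finger gesture from a single K3 sensor reading."""
    k3 = max(0.0, min(90.0, k3_bend_deg))  # clamp to a plausible range
    return FingerPose(k3=k3, k2=k3 * K2_RATIO, k1=k3 * K1_RATIO)

if __name__ == "__main__":
    # A small K3 reading yields an extended (pointing) finger; a large
    # reading yields a finger curled around the controller handle.
    for reading in (5.0, 45.0, 85.0):
        print(reading, estimate_finger_pose(reading))
```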
Consider claim 2, where Wu in view of Martin teaches the method of claim 1, wherein the tracking that a portion of the user's hand is not in contact with the controller device is based on detecting a capacitance or infrared (IR) sensor change in a button of the controller device. (See Wu ¶53, where, after entering a game and according to an interaction between the user and a virtual object, the computer will continuously record a real-time detection result (e.g., a magnetic value, a resistance value, a capacitance value, or other data) detected by each finger.)
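For illustration, the capacitance-based detection recited in claim 2 can be sketched as follows. This is a minimal sketch assuming a normalized per-button capacitance reading; the threshold values and the hysteresis scheme are assumptions, not details from either reference.

```python
TOUCH_THRESHOLD = 0.60    # normalized capacitance above this => finger on button
RELEASE_THRESHOLD = 0.40  # below this => finger lifted (hysteresis band)

class ButtonTouchTracker:
    """Tracks whether a finger is on a capacitive button, with hysteresis
    so that noise near the threshold does not cause flickering."""

    def __init__(self) -> None:
        self.touching = False

    def update(self, capacitance: float) -> bool:
        """Return True while the finger is judged to be on the button."""
        if self.touching and capacitance < RELEASE_THRESHOLD:
            self.touching = False   # finger lifted off the button
        elif not self.touching and capacitance > TOUCH_THRESHOLD:
            self.touching = True    # finger back in contact
        return self.touching

if __name__ == "__main__":
    tracker = ButtonTouchTracker()
    # Simulated readings: contact, then lift-off, then re-contact.
    for c in (0.80, 0.70, 0.35, 0.20, 0.75):
        print(c, "touching" if tracker.update(c) else "lifted")
```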
Consider claim 3, where Wu in view of Martin teaches the method of claim 1, wherein the tracking that a portion of the user's hand is not in contact with the controller device is performed by applying computer vision to one or more captured images depicting at least part of the hand of the user. (See Martin ¶67, where a camera 345 captures real-time video of the real-world environment within the view of the display device 110, thus simulating the view seen by the user. Image data from the camera may be combined with virtual objects or information to present an augmented reality view of the world.) Thus, it would have been obvious to one of ordinary skill in the art that the virtual hand of Wu (see Wu ¶18) used for controlling objects could be rendered into a virtual scene as taught by Martin. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of recognizing that the virtual system taught in Wu is capable of rendering and illustrating a virtual scene, as is standard in the art.
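For illustration, the computer-vision variant recited in claim 3 can be sketched as follows. This is a minimal sketch in which detect_hand_landmarks() is a hypothetical stand-in for any off-the-shelf hand-landmark model, and the contact radius is an assumed tolerance.

```python
import math

def detect_hand_landmarks(frame):
    """Hypothetical stand-in for a real hand-landmark model; expected to
    return a mapping of landmark name -> (x, y, z) in meters."""
    raise NotImplementedError("wire in an actual hand-tracking model")

CONTACT_RADIUS_M = 0.015  # assumed contact tolerance (1.5 cm)

def finger_off_controller(frame, controller_surface_point) -> bool:
    """Decide from a captured image whether the index finger has lifted
    off a known point on the controller surface."""
    landmarks = detect_hand_landmarks(frame)
    tip = landmarks["index_fingertip"]
    return math.dist(tip, controller_surface_point) > CONTACT_RADIUS_M
```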
Consider claim 4, where Wu in view of Martin teaches the method of claim 1, wherein the tracking that the portion of the user's hand is not in contact with the controller device is performed by interpreting movement of the portion of the user's hand based on tracked movement of the controller device. (See Wu Figs. 2 and 3 and ¶48-51, where the sensors 120 are worn around the K3 knuckles, the bending degrees of the first knuckle K1 and the second knuckle K2 are estimated from the bending degree of the third knuckle K3 of each finger, and the gesture of each finger is thereby obtained.)
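For illustration, interpreting the position of a lifted finger from the tracked controller pose (claim 4) can be sketched as a rigid-body transform. This is a minimal sketch assuming the controller pose is available as a position vector plus a 3x3 rotation matrix; the offset value in the example is an assumption.

```python
import numpy as np

def fingertip_world_position(controller_pos, controller_rot, local_offset):
    """Transform a fingertip offset, expressed in the controller's local
    frame, into world coordinates using the tracked controller pose
    (position vector plus 3x3 rotation matrix)."""
    return np.asarray(controller_pos) + np.asarray(controller_rot) @ np.asarray(local_offset)

if __name__ == "__main__":
    # Identity orientation, controller at the origin, and an assumed 8 cm
    # fingertip offset along the controller's forward axis.
    print(fingertip_world_position([0.0, 0.0, 0.0], np.eye(3), [0.0, 0.0, 0.08]))
```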
Consider claim 5, where Wu in view of Martin teaches the method of claim 1, wherein the illustrating the representation of the portion of the user's hand, that is not in contact with the controller device, performing the direct touch interaction includes illustrating one or more fingers of the portion of the user's hand making a pointing gesture. (See Wu Figs. 2 and 3 and ¶48-51, where the sensors 120 are worn around the K3 knuckles, the bending degrees of the first knuckle K1 and the second knuckle K2 are estimated from the bending degree of the third knuckle K3 of each finger, and the gesture of each finger is thereby obtained. Additionally, see Martin Figs. 6A-E and abstract, where pointing gestures are discussed and presented.)
Consider claim 6, where Wu in view of Martin teaches the method of claim 1, wherein the tracking that a portion of the user's hand is not in contact with the controller device includes detecting an angle of a finger of the user in relation to the user's hand; and wherein the illustrating the representation of the portion of the user's hand, that is not in contact with the controller device, performing the direct touch interaction includes showing the representation of the user's hand with a finger at the detected angle. (See Wu ¶57-59, where, when the finger-gesture detection device of the present disclosure is worn on a hand of a user, the position of the first sensor corresponds to a third knuckle of a finger of the user. Therefore, the first sensor may detect the gesture of the third knuckle of each finger, and derive the gesture of each finger by estimating a finger joint motion. By integrating the finger-gesture detection device of the present disclosure and the control handle, the bending degree and gesture of each finger can be detected. As a result, the user can perform more complicated input functions in VR, which in turn brings out diversified VR interactions and allows the user to play games in a more natural way. The correction method can establish different detection ranges for different hand types or different grip modes of different users, thereby more accurately mapping a finger gesture in VR. The bending degree is thus mapped to the virtual finger more accurately.)
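For illustration, the claim 6 behavior of rendering the finger at the detected angle can be sketched as a direct mapping with light smoothing. This is a minimal sketch; the smoothing constant is an assumption, not a parameter from either reference.

```python
SMOOTHING = 0.3  # 0 = frozen, 1 = raw sensor value each frame (assumed)

class VirtualFingerJoint:
    """Drives a rendered finger joint toward the angle detected for the
    corresponding real finger, with light exponential smoothing."""

    def __init__(self, initial_angle_deg: float = 0.0) -> None:
        self.angle = initial_angle_deg

    def update(self, detected_angle_deg: float) -> float:
        """Blend the detected angle into the rendered joint angle."""
        self.angle += SMOOTHING * (detected_angle_deg - self.angle)
        return self.angle

if __name__ == "__main__":
    joint = VirtualFingerJoint()
    for detected in (30.0, 30.0, 30.0, 0.0):
        print(round(joint.update(detected), 2))
```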
Consider claim 7, where Wu in view of Martin teaches the method of claim 1, wherein the tracking that a portion of the user's hand is not in contact with the controller device includes detecting that a user is not touching at least a particular button of the controller device (see Wu Fig. 5A and ¶50-51, where a second sensing component 140a measures the distance of the finger from the second sensing component to calculate the degree of bending, thus being able to sense when the finger is in contact (distance of zero) or not in contact (distance greater than zero)); and wherein the illustrating the representation, of the portion of the user's hand that is not in contact with the controller device, performing the direct touch interaction includes showing the representation of the user's hand with a finger at a predetermined angle to the representation of the user's hand. (See Wu ¶57-59, where, when the finger-gesture detection device of the present disclosure is worn on a hand of a user, the position of the first sensor corresponds to a third knuckle of a finger of the user. Therefore, the first sensor may detect the gesture of the third knuckle of each finger, and derive the gesture of each finger by estimating a finger joint motion. By integrating the finger-gesture detection device of the present disclosure and the control handle, the bending degree and gesture of each finger can be detected. As a result, the user can perform more complicated input functions in VR, which in turn brings out diversified VR interactions and allows the user to play games in a more natural way. The correction method can establish different detection ranges for different hand types or different grip modes of different users, thereby more accurately mapping a finger gesture in VR. The bending degree is thus mapped to the virtual finger more accurately.)
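For illustration, the claim 7 behavior can be sketched under the sensor semantics noted above (a distance of zero indicating contact). The angle constants and contact tolerance below are assumptions.

```python
CONTACT_EPS_M = 0.001      # assumed tolerance for a "distance of zero"
POINTING_ANGLE_DEG = 0.0   # assumed predetermined extended-finger angle
GRIPPING_ANGLE_DEG = 70.0  # assumed curl while the finger is on the button

def rendered_finger_angle(distance_from_button_m: float) -> float:
    """A zero distance means the finger is on the button (curled grip);
    any lift-off snaps the rendered finger to a predetermined angle."""
    if distance_from_button_m <= CONTACT_EPS_M:
        return GRIPPING_ANGLE_DEG
    return POINTING_ANGLE_DEG
```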
Consider claim 8, where Wu in view of Martin teaches the method of claim 1, wherein the providing the representation of the user's hand includes showing a representation of the controller device as being held by the representation of the user's hand. (See Martin Figs. 6A-E and ¶43-46, where the display device 110 enables presentation of information and/or virtual objects together with a viewer's view of the real world. In a virtual reality environment, the display device 110 presents a virtual scene that is completely rendered without combining aspects of the real-world environment. Beneficially, the pointing controller 120 enables a user to interact with the digital content in the virtual environment in a natural way. For example, a user may perform actions with objects in the three-dimensional space of the virtual environment such as pointing at virtual objects to select them, performing a pinching gesture to grab virtual objects, and moving virtual objects around in the virtual environment by motion of the hand.) Thus, it would have been obvious to one of ordinary skill in the art that the virtual hand of Wu (see Wu ¶18) used for controlling objects could be rendered into a virtual scene as taught by Martin. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of recognizing that the virtual system taught in Wu is capable of rendering and illustrating a virtual scene, as is standard in the art.
Consider claim 10, where Wu teaches a computer-readable storage medium storing instructions for providing direct touch interaction with virtual objects while a user holds a controller device of an artificial reality system, the instructions, when executed by a computing system (see Wu Figs. 1C and 1D and ¶43, where a user holds a controller handle 50 with worn sensors 120 around the knuckles of the user's fingers, where the virtual fingers interact with virtual objects, and where an actuator (not shown) may be placed in each of the second wearing portions 120 to provide individual tactile feedback to different fingers of the user (i.e., only a finger interacting with a virtual object will feel the corresponding tactile feedback)), cause the computing system to: provide, by the artificial reality system and while a user's hand holds the controller device, a representation of the user's hand in an artificial reality environment; track that a portion of the user's hand is not in contact with the controller device (see Wu Figs. 2 and 3 and ¶48-51, where the sensors 120 are worn around the K3 knuckles, the bending degrees of the first knuckle K1 and the second knuckle K2 are estimated from the bending degree of the third knuckle K3 of each finger, and the gesture of each finger is thereby obtained); and control, in the artificial reality environment while the user's hand continues to hold the controller device and based on the tracked portion of the user's hand not being in contact with the controller device, a representation of the portion of the user's hand that is not in contact with the controller device performing a direct touch interaction with a virtual object (see Wu Figs. 1C and 1D and ¶43, where only a finger (specifically, the K1 and K2 portions of the finger) interacting with a virtual object will feel the corresponding tactile feedback).
Wu teaches controlling; however, Wu does not explicitly teach illustrating. However, in an analogous field of endeavor, Martin teaches illustrating. (See Martin ¶43-46, where the display device 110 enables presentation of information and/or virtual objects together with a viewer's view of the real world. In a virtual reality environment, the display device 110 presents a virtual scene that is completely rendered without combining aspects of the real-world environment. Beneficially, the pointing controller 120 enables a user to interact with the digital content in the virtual environment in a natural way. For example, a user may perform actions with objects in the three-dimensional space of the virtual environment such as pointing at virtual objects to select them, performing a pinching gesture to grab virtual objects, and moving virtual objects around in the virtual environment by motion of the hand.) Thus, it would have been obvious to one of ordinary skill in the art that the virtual hand of Wu (see Wu ¶18) used for controlling objects could be rendered into a virtual scene as taught by Martin. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of recognizing that the virtual system taught in Wu is capable of rendering and illustrating a virtual scene, as is standard in the art.
Consider claim 11, where Wu in view of Martin teaches the computer-readable storage medium of claim 10, wherein the tracking that a portion of the user's hand is not in contact with the controller device is based on detecting a capacitance or infrared (IR) sensor change in a button of the controller device. (See Wu ¶53, where, after entering a game and according to an interaction between the user and a virtual object, the computer will continuously record a real-time detection result (e.g., a magnetic value, a resistance value, a capacitance value, or other data) detected by each finger.)
Consider claim 12, where Wu in view of Martin teaches the computer-readable storage medium of claim 10, wherein the tracking that a portion of the user's hand is not in contact with the controller device is performed by applying computer vision to one or more captured images depicting at least part of the hand of the user. (See Martin ¶67, where a camera 345 captures real-time video of the real-world environment within the view of the display device 110, thus simulating the view seen by the user. Image data from the camera may be combined with virtual objects or information to present an augmented reality view of the world.) Thus, it would have been obvious to one of ordinary skill in the art that the virtual hand of Wu (see Wu ¶18) used for controlling objects could be rendered into a virtual scene as taught by Martin. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of recognizing that the virtual system taught in Wu is capable of rendering and illustrating a virtual scene, as is standard in the art.
Consider claim 13, where Wu in view of Martin teaches the computer-readable storage medium of claim 10, wherein the tracking that the portion of the user's hand is not in contact with the controller device is performed by interpreting movement of the portion of the user's hand based on tracked movement of the controller device. (See Wu Figs. 2 and 3 and ¶48-51, where the sensors 120 are worn around the K3 knuckles, the bending degrees of the first knuckle K1 and the second knuckle K2 are estimated from the bending degree of the third knuckle K3 of each finger, and the gesture of each finger is thereby obtained.)
Consider claim 14, where Wu in view of Martin teaches the computer-readable storage medium of claim 10, wherein the illustrating the representation of the portion of the user's hand, that is not in contact with the controller device, performing the direct touch interaction includes illustrating one or more fingers of the portion of the user's hand making a pointing gesture. (See Wu Figs. 2 and 3 and ¶48-51, where the sensors 120 are worn around the K3 knuckles, the bending degrees of the first knuckle K1 and the second knuckle K2 are estimated from the bending degree of the third knuckle K3 of each finger, and the gesture of each finger is thereby obtained. Additionally, see Martin Figs. 6A-E and abstract, where pointing gestures are discussed and presented.)
Consider claim 15, where Wu in view of Martin teaches the computer-readable storage medium of claim 10, wherein the tracking that a portion of the user's hand is not in contact with the controller device includes detecting an angle of a finger of the user in relation to the user's hand; and wherein the illustrating the representation of the portion of the user's hand, that is not in contact with the controller device, performing the direct touch interaction includes showing the representation of the user's hand with a finger at the detected angle. (See Wu ¶57-59, where, when the finger-gesture detection device of the present disclosure is worn on a hand of a user, the position of the first sensor corresponds to a third knuckle of a finger of the user. Therefore, the first sensor may detect the gesture of the third knuckle of each finger, and derive the gesture of each finger by estimating a finger joint motion. By integrating the finger-gesture detection device of the present disclosure and the control handle, the bending degree and gesture of each finger can be detected. As a result, the user can perform more complicated input functions in VR, which in turn brings out diversified VR interactions and allows the user to play games in a more natural way. The correction method can establish different detection ranges for different hand types or different grip modes of different users, thereby more accurately mapping a finger gesture in VR. The bending degree is thus mapped to the virtual finger more accurately.)
Consider claim 16, where Wu in view of Martin teaches the computer-readable storage medium of claim 10, wherein the tracking that a portion of the user's hand is not in contact with the controller device includes detecting that a user is not touching at least a particular button of the controller device (see Wu Fig. 5A and ¶50-51, where a second sensing component 140a measures the distance of the finger from the second sensing component to calculate the degree of bending, thus being able to sense when the finger is in contact (distance of zero) or not in contact (distance greater than zero)); and wherein the illustrating the representation, of the portion of the user's hand that is not in contact with the controller device, performing the direct touch interaction includes showing the representation of the user's hand with a finger at a predetermined angle to the representation of the user's hand. (See Wu ¶57-59, where, when the finger-gesture detection device of the present disclosure is worn on a hand of a user, the position of the first sensor corresponds to a third knuckle of a finger of the user. Therefore, the first sensor may detect the gesture of the third knuckle of each finger, and derive the gesture of each finger by estimating a finger joint motion. By integrating the finger-gesture detection device of the present disclosure and the control handle, the bending degree and gesture of each finger can be detected. As a result, the user can perform more complicated input functions in VR, which in turn brings out diversified VR interactions and allows the user to play games in a more natural way. The correction method can establish different detection ranges for different hand types or different grip modes of different users, thereby more accurately mapping a finger gesture in VR. The bending degree is thus mapped to the virtual finger more accurately.)
Consider claim 17, where Wu in view of Martin teaches the computer-readable storage medium of claim 10, wherein the providing the representation of the user's hand includes showing a representation of the controller device as being held by the representation of the user's hand. (See Martin Figs. 6A-E and ¶43-46, where the display device 110 enables presentation of information and/or virtual objects together with a viewer's view of the real world. In a virtual reality environment, the display device 110 presents a virtual scene that is completely rendered without combining aspects of the real-world environment. Beneficially, the pointing controller 120 enables a user to interact with the digital content in the virtual environment in a natural way. For example, a user may perform actions with objects in the three-dimensional space of the virtual environment such as pointing at virtual objects to select them, performing a pinching gesture to grab virtual objects, and moving virtual objects around in the virtual environment by motion of the hand.) Thus, it would have been obvious to one of ordinary skill in the art that the virtual hand of Wu (see Wu ¶18) used for controlling objects could be rendered into a virtual scene as taught by Martin. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of recognizing that the virtual system taught in Wu is capable of rendering and illustrating a virtual scene, as is standard in the art.
Consider claim 19, where Wu teaches a computing system for providing direct touch interaction with virtual objects while a user holds a controller device of an artificial reality system, the computing system comprising: one or more processors; and one or more memories storing instructions that, when executed by the one or more processors (see Wu Figs. 1C and 1D and ¶43, where a user holds a controller handle 50 with worn sensors 120 around the knuckles of the user's fingers, where the virtual fingers interact with virtual objects, and where an actuator (not shown) may be placed in each of the second wearing portions 120 to provide individual tactile feedback to different fingers of the user (i.e., only a finger interacting with a virtual object will feel the corresponding tactile feedback)), cause the computing system to: provide, by the artificial reality system and while a user's hand holds the controller device, a representation of the user's hand in an artificial reality environment; track that a portion of the user's hand is not in contact with the controller device (see Wu Figs. 2 and 3 and ¶48-51, where the sensors 120 are worn around the K3 knuckles, the bending degrees of the first knuckle K1 and the second knuckle K2 are estimated from the bending degree of the third knuckle K3 of each finger, and the gesture of each finger is thereby obtained); and control, in the artificial reality environment while the user's hand continues to hold the controller device and based on the tracked portion of the user's hand not being in contact with the controller device, a representation of the portion of the user's hand that is not in contact with the controller device performing a direct touch interaction with a virtual object (see Wu Figs. 1C and 1D and ¶43, where only a finger (specifically, the K1 and K2 portions of the finger) interacting with a virtual object will feel the corresponding tactile feedback).
Wu teaches controlling; however, Wu does not explicitly teach illustrating. However, in an analogous field of endeavor, Martin teaches illustrating. (See Martin ¶43-46, where the display device 110 enables presentation of information and/or virtual objects together with a viewer's view of the real world. In a virtual reality environment, the display device 110 presents a virtual scene that is completely rendered without combining aspects of the real-world environment. Beneficially, the pointing controller 120 enables a user to interact with the digital content in the virtual environment in a natural way. For example, a user may perform actions with objects in the three-dimensional space of the virtual environment such as pointing at virtual objects to select them, performing a pinching gesture to grab virtual objects, and moving virtual objects around in the virtual environment by motion of the hand.) Thus, it would have been obvious to one of ordinary skill in the art that the virtual hand of Wu (see Wu ¶18) used for controlling objects could be rendered into a virtual scene as taught by Martin. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of recognizing that the virtual system taught in Wu is capable of rendering and illustrating a virtual scene, as is standard in the art.
Consider claim 20, where Wu in view of Martin teaches the computing system of claim 19, wherein the tracking that a portion of the user's hand is not in contact with the controller device is based on detecting a capacitance or infrared (IR) sensor change in a button of the controller device. (See Wu ¶53, where, after entering a game and according to an interaction between the user and a virtual object, the computer will continuously record a real-time detection result (e.g., a magnetic value, a resistance value, a capacitance value, or other data) detected by each finger.)
Allowable Subject Matter
Claims 9 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Claim 9 recites the limitation "in response to detecting that the tracked movement moved the user's hand within a threshold distance of an object configured for direct touch interaction, hiding the representation of the controller." Kim et al. (US 2020/0159337) teaches, at Figs. 10-13 and ¶132-143, the unification of the virtual representation of the user's hand with the virtual representation of the controller. However, it would have been non-obvious to implement the integration of the hand and controller specifically when the hand is within a threshold distance of touching a virtual object, as claimed.
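For illustration, the distinguishing claim 9 limitation can be sketched as follows. This is a minimal sketch; the threshold value and function names are assumptions, not details from the claim or from Kim.

```python
import math

DIRECT_TOUCH_THRESHOLD_M = 0.10  # assumed 10 cm activation radius

def should_hide_controller(hand_pos, touch_object_pos) -> bool:
    """Hide the rendered controller once the tracked hand comes within a
    threshold distance of an object configured for direct touch."""
    return math.dist(hand_pos, touch_object_pos) < DIRECT_TOUCH_THRESHOLD_M

if __name__ == "__main__":
    print(should_hide_controller((0.0, 0.0, 0.0), (0.05, 0.0, 0.0)))  # True
    print(should_hide_controller((0.0, 0.0, 0.0), (0.50, 0.0, 0.0)))  # False
```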
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM LU whose telephone number is (571)270-1809. The examiner can normally be reached 10am-6:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM LU/Primary Examiner, Art Unit 2624