DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is in response to the applicant’s reply filed October 27, 2025. In the applicant’s reply, claims 1-2 and 13-14 were amended, and claims 8, 10-11, 20, and 22-23 were cancelled. Claims 1-7, 9, 12, 15-19, 21 and 24 are pending in this application.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Examiner’s Response to Applicant’s Remarks
Applicant’s amendments filed on October 27, 2025 have been fully considered. The amendments overcome the following rejections set forth in the Office action mailed on June 24, 2025.
Applicant’s amendments overcome the objection to the title of the specification, and the objection is hereby withdrawn.
Applicant’s amendments overcome the rejections of Claim 1 under 35 U.S.C. 102(a)(2) as being anticipated by Lavie et al. (US Patent US11915479B1, originally filed on December 30, 2020 with provisional priority to December 30, 2019, hereby referred to as “Lavie”), and the rejection is hereby withdrawn.
Applicant’s amendments overcome the rejection of Claims 1-4, 7, 9, 12-16, 19, 21 and 24 under 35 U.S.C. 103 as being unpatentable over Lavie et al. (US Patent US11915479B1, originally filed on December 30, 2020 with provisional priority to December 30, 2019, hereby referred to as “Lavie”), in view of Greenblatt et al. (US PGPub US 2020/0128899A1, originally filed on October 31, 2018, hereby referred to as “Greenblatt”).
Applicant's arguments with respect to the pending claims have been considered but are moot in view of the new grounds of rejection, presented below and necessitated by the applicant’s amendments.
Applicant’s arguments filed on October 27, 2025 have been fully considered but they are not persuasive. The Examiner has thoroughly reviewed Applicant’s arguments but firmly believes that the cited references reasonably and properly meet the claimed limitations.
Applicant argues that Greenblatt does not teach the amended features for the “use of video images to track changes in motion, velocity, or acceleration to calculate impact force as it happens” in real-time, and rather that Greenblatt teaches the “assessment of neurological injury after the fact” using data from “questions and responses after the injury has occurred, combined with the reading from wearable sensors”.
Examiner respectfully disagrees. Examiner has cited particular columns and line numbers or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing the responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the Examiner. Greenblatt clearly teaches the use of video data of player interactions, as well as wearable sensors that measure acceleration and force in real time, as presented in [0017], and further teaches that the cognitive engine relies on collision-based learning metrics, including force/acceleration data that exceeds a threshold value, as presented in [0023].
[0017] In example embodiments, the sensors also include sensors 108 disposed within the environment 102 but which are not integrated with or otherwise disposed on players' equipment. Such sensors may include, for example, image sensors to capture still images and/or video of player interactions on the field. The sensors 108 in the environment 102 may also include microphones to capture audio of player interactions on the field. In example embodiments, the sensors 108 further include vibration sensors, accelerometers, Global Positioning System (GPS) receivers, or the like to capture additional forms of sensor data relating to player interactions.
[0023] For example, the cognitive engine 110 may learn the types of collisions that have resulted in neurological injury in the past such as collisions that involve players between whom there is a significant (e.g., above a threshold value) deviation in weight or height; collisions that produce force/acceleration data that exceeds a threshold value; collisions that involve particular player positions or particular types of plays (e.g., a safety in American football tackling a receiver coming over the middle of the field); and so forth. Further, in certain example embodiments, ground-truth data including video data, image data, inertial sensor data, or the like relating to player interactions that were ultimately determined to have resulted in neurological injury but which were not detected on the field can also be input to the cognitive engine 110. The cognitive engine 110 can attempt to learn patterns from this data that can be used to avoid failing to detect subsequent incidents of neurological injury in similar situational circumstances.
Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. Accordingly, the Examiner considers Greenblatt’s measurement of force and acceleration data from collisions using wearable sensors, which is then used in the cognitive engine’s machine learning algorithm in conjunction with threshold values, to meet Applicant’s “receiving the video images within a computer processor and executing a learning algorithm for tracking within the video images one or more of motion, velocity, and acceleration at one or more points on a surface of the living body within the video images; calculating in substantially real time a force associated with physical impact upon the surface at each of the one or more points” within the broad meaning of those terms. Furthermore, the Examiner is not limited to Applicant’s definition, which is not specifically set forth in the claims. In re Tanaka, 193 USPQ 139 (CCPA 1977).
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-7, 9, 12, 15-19, 21 and 24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Independent claims 1 and 13 were amended to now recite the following features: “at least a portion of each image frame including the living body moving within a defined area” and “calculate in substantially real time a force associated with physical impact upon the surface at each of the one or more points”, which are not sufficiently described. The terms “portion”, “moving”, “substantially” and “surface” are not recited in the written disclosure, and as such are not adequately supported by the written description. These features were not elected by original presentation, and as such constitute subject matter that is not supported by the written description. Further clarification is needed for the Examiner to fully consider the amended features with respect to the prior art. For purposes of examination, the features will be considered with respect to the plain meaning of the features that are supported.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-7, 9, 12, 15-19, 21 and 24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite. Independent claims 1 and 13 were amended to now recite the following features: “at least a portion of each image frame including the living body moving within a defined area” and “calculate in substantially real time a force associated with physical impact upon the surface at each of the one or more points”, which are not sufficiently described. The term “substantially real time” in independent claims 1 and 13 is a relative term which renders the claims indefinite. Additionally, the amended feature “the living body moving within a defined area” is not defined by the claims. The specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Further clarification is needed for the Examiner to fully consider the amended features with respect to the prior art. For purposes of examination, the features will be considered with respect to the plain meaning of the features that are supported.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 and 13 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Greenblatt et al. (US PGPub US 2020/0128899A1, originally filed on October 31, 2018), hereby referred to as “Greenblatt”.
Consider Claims 1 and 13.
Greenblatt teaches:
1. A system for monitoring physical impact on a living body in substantially real time, the system comprising: / 13. A method for monitoring physical impact on a living body in substantially real time, the system comprising: (Greenblatt: abstract, Systems, methods, and computer-readable media are described for predicting a neurological injury to a participant in an activity. The activity can be, for example, an athletic activity that involves repeated, high-impact collisions between participants. Sensor data reflecting interactions between participants in the activity is received from various wearable and non-wearable sensors. The sensor data is evaluated in conjunction with a baseline neurological risk profile of a participant to determine a likelihood that the participant has suffered a potential neurological injury. If this likelihood meets a threshold risk level, an onsite request/response test is initiated to glean more information relating to the participant's condition. Response data associated with the onsite test is cognitively evaluated to determine an updated likelihood of neurological injury to the participant and a follow-up action is determined based on the updated likelihood of neurological injury. [0014]-[0036], Figures 1, 2A-B, 3, [0014] FIG. 1 is a schematic hybrid data flow/block diagram illustrating smart prediction of neurological injury in accordance with one or more example embodiments. FIGS. 2A and 2B are process flow diagrams of an illustrative method 200 for predicting a neurological injury to a participant in an activity in accordance with one or more example embodiments. Each of FIGS. 2A and 2B will be described in conjunction with FIG. 1 hereinafter.)
1. at least one camera configured to collect video images comprising a plurality of image frames at least a portion of each image frame including of the living body moving within a defined area, the video images comprising a sequence of high resolution data at a high frame rate; / 13. collecting video images comprising a plurality of image frames of the living body moving within a defined area, the video images comprising a sequence of high resolution data at a high frame rate; (Greenblatt: [0014] FIG. 1 is a schematic hybrid data flow/block diagram illustrating smart prediction of neurological injury in accordance with one or more example embodiments. FIGS. 2A and 2B are process flow diagrams of an illustrative method 200 for predicting a neurological injury to a participant in an activity in accordance with one or more example embodiments. Each of FIGS. 2A and 2B will be described in conjunction with FIG. 1 hereinafter. [0015] Referring first to FIG. 1, an environment 102 is depicted in which an activity involving multiple participants is occurring. In example embodiments, the activity is an athletic activity such as a sporting contest that by its nature involves repeated physical contact between participants, where such contact often includes high-impact collisions with significant amounts of force. In those example embodiments in which the activity is a sporting activity, the environment 102 may be a field, arena, stadium, or any other venue in which such an activity may take place. For ease of explanation, example embodiments of the invention will be described hereinafter with respect the example activity of an American football game. [0017] In example embodiments, the sensors also include sensors 108 disposed within the environment 102 but which are not integrated with or otherwise disposed on players' equipment. Such sensors may include, for example, image sensors to capture still images and/or video of player interactions on the field. 
The sensors 108 in the environment 102 may also include microphones to capture audio of player interactions on the field. In example embodiments, the sensors 108 further include vibration sensors, accelerometers, Global Positioning System (GPS) receivers, or the like to capture additional forms of sensor data relating to player interactions. )
1. a computer processor configured to receive the video images and execute a learning algorithm for tracking within the video images one or more of motion, velocity, and acceleration at one or more points on a surface of the living body and calculate in substantially real time a force associated with physical impact upon the surface at each of to the one or more points; / 13. receiving the video images within a computer processor and executing a learning algorithm for tracking within the video images one or more of motion, velocity, and acceleration at one or more points on a surface of the living body within the video images; calculating in substantially real time a force associated with physical impact upon the surface at each of the one or more points; (Greenblatt: [0022] In certain example embodiments, the cognitive engine 110 is a machine learning construct such as a type of neural network (e.g., a convolutional neural network) that is capable of being trained based on ground-truth data to more accurately determine the likelihood that a player has sustained a neurological injury. The ground-truth data may include, for example, image data, video data, and/or other forms of sensor data (e.g., force/acceleration data) known to be associated with the occurrence of neurological injury. The cognitive engine 110 may be trained to learn patterns from such historical ground-truth data using, for example, a computer vision-based approach. [0023] For example, the cognitive engine 110 may learn the types of collisions that have resulted in neurological injury in the past such as collisions that involve players between whom there is a significant (e.g., above a threshold value) deviation in weight or height; collisions that produce force/acceleration data that exceeds a threshold value; collisions that involve particular player positions or particular types of plays (e.g., a safety in American football tackling a receiver coming over the middle of the field); and so forth. 
Further, in certain example embodiments, ground-truth data including video data, image data, inertial sensor data, or the like relating to player interactions that were ultimately determined to have resulted in neurological injury but which were not detected on the field can also be input to the cognitive engine 110. The cognitive engine 110 can attempt to learn patterns from this data that can be used to avoid failing to detect subsequent incidents of neurological injury in similar situational circumstances.)
1. and a high-speed interface configured to communicate the video images to the computer processors wherein the computer processor is further configured to compare the calculated force to a predetermined threshold corresponding to an impact associated with a potential injury risk. / 13. and comparing the calculated force to a predetermined threshold corresponding to an impact associated with a potential injury risk. (Greenblatt: [0024] In example embodiments, the cognitive engine 110 performs the cognitive risk analysis at block 206 in real-time such that a determination can be made dynamically as to whether to initiate an onsite test for potential neurological injury to a player. More specifically, at block 208 of the method 200, in example embodiments, the cognitive engine 110 determines whether the likelihood of potential neurological injury to the player of interest determined based on the cognitive analysis performed at block 206 satisfies a threshold risk level. As used herein, and depending on the implementation, a first value satisfies a second value (e.g., a threshold value) if the first value is strictly greater than the second value; greater than or equal to the second value; strictly less than the second value; or less than or equal to the second value. [0025] In response to a negative determination at block 208 indicating that the likelihood of potential neurological injury to the player of interest does not meet the threshold risk level, the method 200 returns to block 202 where additional sensor data 112 is received and the cognitive analysis is again performed on this additional sensor data 112. 
In this manner, in example embodiments, player interactions are continually monitored and sensor data relating thereto is captured and cognitively analyzed on a continual basis throughout a sporting event until a determination is made that the likelihood of potential neurological injury to a player meets the threshold amount of risk for initiating the onsite testing. It should be appreciated that the cognitive analysis performed at block 206 can be performed with respect to different players at different iterations of the method (e.g., can be performed with respect to a first player involved in a collision and a second different player involved in a later collision) and/or can be performed in parallel for multiple players at any given iteration of the method 200. [0026] In response to a positive determination at block 208, the cognitive engine 110 may send an initiation signal 118 to an onsite testing engine 120 to initiate, at block 212 of the method 200, an onsite test to attempt to determine with more certainty whether a player of interest has suffered a neurological injury. In example embodiments, the onsite test may be request/response protocol designed to glean more information as to whether a player has suffered a neurological injury. In certain example embodiments, prior to initiating the onsite testing, the cognitive engine 110 (or the onsite testing engine 120) determines whether a threshold number of iterations has been reached for the onsite testing. In example embodiments, the onsite testing engine 120 only proceeds with the onsite testing if the threshold number of iterations has not been reached.)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7, 9, 12-16, 19, 21 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Devassy et al. (US PGPub US20220129669A1, hereby referred to as “Devassy”), in view of Greenblatt et al. (US PGPub US 2020/0128899A1, originally filed on October 31, 2018), hereby referred to as “Greenblatt”.
Consider Claims 1 and 13.
Devassy teaches:
1. A system for monitoring physical impact on a living body in substantially real time, the system comprising: / 13. A method for monitoring physical impact on a living body in substantially real time, the system comprising: (Devassy: abstract; A system and method for providing multi-camera 3D body part labeling and performance metrics includes receiving 2D image data and 3D depth data from a plurality image capture units (ICUs) each indicative of a scene viewed by the ICUs, the scene having at least one person, each ICU viewing the person from a different viewing position, determining 3D location data and visibility confidence level for the body parts from each ICU, using the 2D image data and the 3D depth data from each ICU, transforming the 3D location data for the body parts from each ICU to a common reference frame for body parts having at least a predetermined visibility confidence level, averaging the transformed, visible 3D body part locations from each ICU, and determining a performance metric of at least one of the body parts using the averaged 3D body part locations. The person may be a player in a sports scene. [0027]-[0031])
1. at least one camera configured to collect video images comprising a plurality of image frames at least a portion of each image frame including of the living body moving within a defined area, the video images comprising a sequence of high resolution data at a high frame rate; / 13. collecting video images comprising a plurality of image frames of the living body moving within a defined area, the video images comprising a sequence of high resolution data at a high frame rate; (Devassy: [0031] The image capture units (ICUs) may be a combination of a video camera which provides 2D digital image and a 3D depth-sensing device/sensors that can accurately measure the XYZ distance, both viewing the same location. It produces a digital output data for each pixel in the image having both 2D image data (RGB) and 3D XYZ coordinate data (see FIGS. 8A and 8B). [0037] Referring to FIG. 1, an aerial view 100 of a boxing ring 102 is provided with two boxers, 104, 106, and four 2D/3D video cameras (or image capture units or ICUs) 110A,110B, 110C, 110D (collectively, the cameras or ICUs 110), each ICU having a respective field of view 112A, 112B, 112C, 112D (collectively, field of view 112A) of the ring 102. The Image Capture Units (ICUs) 110 may be any 2D/3D video cameras, e.g., an Intel® RealSense™ camera, made by Intel Corp., such as Intel® RealSense™ Depth Camera SR305 (2016) or Intel® RealSense™ LiDAR Camera L515 (2019), or the like, that provide the functions and performance described herein.)
1. a computer processor configured to receive the video images and execute a learning algorithm for tracking within the video images one or more of motion, velocity, and acceleration at one or more points on a surface of the living body and calculate in substantially real time metrics associated with one or more points / 13. receiving the video images within a computer processor and executing a learning algorithm for tracking within the video images one or more of motion, velocity, and acceleration at one or more points on a surface of the living body within the video images; calculating in substantially real time metrics associated with one or more points (Devassy: [0039] FIG. 2 illustrates various components (or devices or logic) of a system and method 200 for multi-camera, 3D body part labeling and performance metrics calculation, including Semantic Labeling & Metrics Logic 202, which includes ICU/Player ID Logic 204, Pose Estimation Model Logic 206, Centroid & Visibility/Occlusion Logic 208, Transform/Averaging & Player Tracking Logic 210, and Metrics Calculation Logic 212. The Player ID Logic 204 receives digital data from the 2D/3D Image Capture Units (ICU1-ICU4) 110A-110D, where each pixel in each image from each ICU may have a format of X,Y,Z; R,G,B, as described herein. The Player ID Logic 204 may also store the data for each pixel in the image frame received from each ICU onto an ICU/Player ID Server 214, e.g., in a 2D (row(i),column(j)), e.g., 480 rows×640 columns of pixels, table or matrix format, where each cell in the table represents a pixel, such as pixels 860 shown in FIG. 8A (discussed more hereinafter). In some embodiments, the ICUs may store the 2D/3D image frame data (XYZ; RGB) directly on the ICU/Player ID Server 214. 
[0040] The Player ID Logic 204 uses the 2D RGB data from the image and known machine learning models/tools, such as YOLO/Mask R-CNN (discussed hereinafter) or the like, which are trained on a significant amount of the 2D-labeled data to identify people and objects, to provide people pixel boxes (or “bounding boxes”, e.g., from YOLO) in the image frame indicative of where people are located in the image, as well as regions or areas or masks for the entire body (or full-body cutout), including body part areas such as the torso and head, which may be provided as a collection or cluster or set of pixels (e.g., from Mask R-CNN), which are not labeled as specific body parts. The Player ID Logic 204 organizes the people pixel boxes from smallest to largest and identifies the two largest boxes (closest people or Players to the camera), such as the people pixel boxes 705A, 705B shown in FIGS. 7 and 8, and assigns them to Player A and Player B based on corresponding predetermined pixel pattern descriptors (or identifiers or feature vectors) for Player A and Player B stored on the server 214 (as discussed herein regarding camera calibration and setup), for each ICU , discussed more hereinafter. It does this for each of the ICUs 110A-110D. Thus, the Player ID Logic 204 provides Player A and Player B for ICU1 (PA1, PB1), Player A and Player B for ICU2 (PA2, PB2), Player A and Player B for ICU3 (PA3, PB3), and Player A and Player B for ICU4 (PA4, PB4). The people boxes include both the RGB image data as well as the XYZ 3D depth data for each pixel for each player from each ICU. Thus, the Player ID Logic 204 assigns player identities consistently and accurately across multiple ICU units, with the same physical player being denoted A (or B) accordingly, regardless of ICU viewpoint.[0041], [0068] Referring back to FIG. 
2, the Metrics Calculation Logic 212 receives the tracked PA: 14 BPLs (X′avg, Y′avg, Z′avg) and tracked PB: 14 BPLs (X′avg, Y′avg, Z′avg), and calculates desired metrics regarding the athletes (or people) being tracked, based on the movement of their body part label centroids BPL-C. Examples of metrics for various BPL-C body parts for each player that may be calculated by the Metrics Calculation Logic 212, include the athlete's (or person's) location, velocity, pose, power, contact between objects/body parts, energy transferred upon contact, balance, positioning, “effective aggression”, “ring generalship”, or other performance metrics of the athlete. An example of a velocity calculation for each BPL-C for Player A and Player B is provided in FIG. 13 and discussed further herein with FIG. 11. The results of the Metrics Calculation Logic 212 for each of the BPL-Cs for Player A and Player B, may be stored in a Player Metrics Server 218, or in another storage location. Also, any additional data or constants needed to calculate the desired metrics of a given player, player physical attributes (height, weight, length of limbs, weight or mass of body parts, and the like), or image frame rate (DT, e.g., 1/60 sec), or any other additional needed data, may also be stored or retrieved from the Player Metrics Server 218, if desired.)
1. and a high-speed interface configured to communicate the video images to the computer processors wherein the computer processor is further configured to compare a calculation for coordinates with a predetermined threshold / 13. and comparing the compare a calculation for coordinates with a predetermined threshold.(Devassy: [0077] Referring back to FIG. 5, next, block 506 saves the 17 BPL-Js and CL (XYZ; CL) for PA and PB for the current ICU on the Player Body Parts Data Server 216. Next, block 508 determines if all the ICUs have been checked and if not, block 510 advances to the next ICU and the logic proceeds back to block 502 to repeat the process for the next ICU. If the result of block 504 is Yes, all ICUs have been checked and the logic exits. [0078] Referring to FIGS. 6 and 8B and 9, a flow diagram 600 (FIG. 6) illustrates one embodiment of a process or logic for implementing the Centroid & Visibility/Occlusion Logic 208 (FIG. 2). The process 600 begins at block 602 which retrieves 17 BPL Joints (BPL-Js) from the Pose Estimation Logic or the Player Body Parts Data Server 216 having a data format including 3D coordinates (XYZ); Confidence Level (CL); and Orientation (ORTN), or (XYZ; CL; ORTN), for each Player for the current ICU. Next, block 604 calculates 14 Body Part Label Centroids (BPL-Cs) using the 17 BPL Joints (BPL-Js) with at least a 90% confidence level (CL>=0.9) for each Player. Any BPL-Js with a lower confidence level are not calculated for the current ICU for the current image frame, and the corresponding BPL-C=N/A (not available or not active or not valid), and will not be used in any visibility/occlusion determinations. [0079] Referring to FIG. 9 and FIG. 8B, the inputs (Table 902—FIG. 9) and outputs (Table 904) of the centroid calculation are shown. 
In particular, for the Head centroid (BPL-C1), five head points BPL-J1 to BPL-J5 (Nose, Left Eye, Right Eye, Left Ear, Right Ear) are used to calculate a centroid point (or set of points or pixels) to represent the Head centroid (BPL-C1), e.g., the point or pixel closest to the center of these five (5) head points in XYZ 3D space, as shown in the output table 904. Similarly, for the Torso area (BPL-C14), four (4) body joint points (Left Shoulder, Right Shoulder, Left Hip, Right Hip) are used to calculate a centroid point (or set of points or pixels) to represent the Torso centroid (BPL-C14), e.g., the point or pixel closest to the center of these four points in XYZ 3D space, as shown in the output table 904. For certain centroids, the centroid may be calculated as the center (or middle or average location) between two joints, such as Left Calf centroid (BPL-C2), would be the center distance between the Left Knee and Left Ankle in XYZ space, as shown in output table 904. For certain other centroids, the joint and the centroid may be the same region of the body as the joint, such as for Left Foot (use Left Ankle joint), Left Wrist (use Left Wrist joint), Right Ankle (use Right Ankle joint), Right Wrist (use Right Wrist joint).)
Devassy does not teach:
a force associated with physical impact upon the surface at each of the one or more points
calculated force to a predetermined threshold corresponding to an impact associated with a potential injury risk
Greenblatt teaches:
1. A system for monitoring physical impact on a living body in substantially real time, the system comprising: / 13. A method for monitoring physical impact on a living body in substantially real time, the system comprising: (Greenblatt: abstract, Systems, methods, and computer-readable media are described for predicting a neurological injury to a participant in an activity. The activity can be, for example, an athletic activity that involves repeated, high-impact collisions between participants. Sensor data reflecting interactions between participants in the activity is received from various wearable and non-wearable sensors. The sensor data is evaluated in conjunction with a baseline neurological risk profile of a participant to determine a likelihood that the participant has suffered a potential neurological injury. If this likelihood meets a threshold risk level, an onsite request/response test is initiated to glean more information relating to the participant's condition. Response data associated with the onsite test is cognitively evaluated to determine an updated likelihood of neurological injury to the participant and a follow-up action is determined based on the updated likelihood of neurological injury. [0014]-[0036], Figures 1, 2A-B, 3, [0014] FIG. 1 is a schematic hybrid data flow/block diagram illustrating smart prediction of neurological injury in accordance with one or more example embodiments. FIGS. 2A and 2B are process flow diagrams of an illustrative method 200 for predicting a neurological injury to a participant in an activity in accordance with one or more example embodiments. Each of FIGS. 2A and 2B will be described in conjunction with FIG. 1 hereinafter.)
1. at least one camera configured to collect video images comprising a plurality of image frames at least a portion of each image frame including of the living body moving within a defined area, the video images comprising a sequence of high resolution data at a high frame rate; / 13. collecting video images comprising a plurality of image frames of the living body moving within a defined area, the video images comprising a sequence of high resolution data at a high frame rate; (Greenblatt: [0014] FIG. 1 is a schematic hybrid data flow/block diagram illustrating smart prediction of neurological injury in accordance with one or more example embodiments. FIGS. 2A and 2B are process flow diagrams of an illustrative method 200 for predicting a neurological injury to a participant in an activity in accordance with one or more example embodiments. Each of FIGS. 2A and 2B will be described in conjunction with FIG. 1 hereinafter. [0015] Referring first to FIG. 1, an environment 102 is depicted in which an activity involving multiple participants is occurring. In example embodiments, the activity is an athletic activity such as a sporting contest that by its nature involves repeated physical contact between participants, where such contact often includes high-impact collisions with significant amounts of force. In those example embodiments in which the activity is a sporting activity, the environment 102 may be a field, arena, stadium, or any other venue in which such an activity may take place. For ease of explanation, example embodiments of the invention will be described hereinafter with respect the example activity of an American football game. [0017] In example embodiments, the sensors also include sensors 108 disposed within the environment 102 but which are not integrated with or otherwise disposed on players' equipment. Such sensors may include, for example, image sensors to capture still images and/or video of player interactions on the field. 
The sensors 108 in the environment 102 may also include microphones to capture audio of player interactions on the field. In example embodiments, the sensors 108 further include vibration sensors, accelerometers, Global Positioning System (GPS) receivers, or the like to capture additional forms of sensor data relating to player interactions. )
1. a computer processor configured to receive the video images and execute a learning algorithm for tracking within the video images one or more of motion, velocity, and acceleration at one or more points on a surface of the living body and calculate in substantially real time a force associated with physical impact upon the surface at each of the one or more points; / 13. receiving the video images within a computer processor and executing a learning algorithm for tracking within the video images one or more of motion, velocity, and acceleration at one or more points on a surface of the living body within the video images; calculating in substantially real time a force associated with physical impact upon the surface at each of the one or more points; (Greenblatt: [0022] In certain example embodiments, the cognitive engine 110 is a machine learning construct such as a type of neural network (e.g., a convolutional neural network) that is capable of being trained based on ground-truth data to more accurately determine the likelihood that a player has sustained a neurological injury. The ground-truth data may include, for example, image data, video data, and/or other forms of sensor data (e.g., force/acceleration data) known to be associated with the occurrence of neurological injury. The cognitive engine 110 may be trained to learn patterns from such historical ground-truth data using, for example, a computer vision-based approach. [0023] For example, the cognitive engine 110 may learn the types of collisions that have resulted in neurological injury in the past such as collisions that involve players between whom there is a significant (e.g., above a threshold value) deviation in weight or height; collisions that produce force/acceleration data that exceeds a threshold value; collisions that involve particular player positions or particular types of plays (e.g., a safety in American football tackling a receiver coming over the middle of the field); and so forth.
Further, in certain example embodiments, ground-truth data including video data, image data, inertial sensor data, or the like relating to player interactions that were ultimately determined to have resulted in neurological injury but which were not detected on the field can also be input to the cognitive engine 110. The cognitive engine 110 can attempt to learn patterns from this data that can be used to avoid failing to detect subsequent incidents of neurological injury in similar situational circumstances.)
1. and a high-speed interface configured to communicate the video images to the computer processors wherein the computer processor is further configured to compare the calculated force to a predetermined threshold corresponding to an impact associated with a potential injury risk. / 13. and comparing the calculated force to a predetermined threshold corresponding to an impact associated with a potential injury risk. (Greenblatt: [0024] In example embodiments, the cognitive engine 110 performs the cognitive risk analysis at block 206 in real-time such that a determination can be made dynamically as to whether to initiate an onsite test for potential neurological injury to a player. More specifically, at block 208 of the method 200, in example embodiments, the cognitive engine 110 determines whether the likelihood of potential neurological injury to the player of interest determined based on the cognitive analysis performed at block 206 satisfies a threshold risk level. As used herein, and depending on the implementation, a first value satisfies a second value (e.g., a threshold value) if the first value is strictly greater than the second value; greater than or equal to the second value; strictly less than the second value; or less than or equal to the second value. [0025] In response to a negative determination at block 208 indicating that the likelihood of potential neurological injury to the player of interest does not meet the threshold risk level, the method 200 returns to block 202 where additional sensor data 112 is received and the cognitive analysis is again performed on this additional sensor data 112. 
In this manner, in example embodiments, player interactions are continually monitored and sensor data relating thereto is captured and cognitively analyzed on a continual basis throughout a sporting event until a determination is made that the likelihood of potential neurological injury to a player meets the threshold amount of risk for initiating the onsite testing. It should be appreciated that the cognitive analysis performed at block 206 can be performed with respect to different players at different iterations of the method (e.g., can be performed with respect to a first player involved in a collision and a second different player involved in a later collision) and/or can be performed in parallel for multiple players at any given iteration of the method 200. [0026] In response to a positive determination at block 208, the cognitive engine 110 may send an initiation signal 118 to an onsite testing engine 120 to initiate, at block 212 of the method 200, an onsite test to attempt to determine with more certainty whether a player of interest has suffered a neurological injury. In example embodiments, the onsite test may be request/response protocol designed to glean more information as to whether a player has suffered a neurological injury. In certain example embodiments, prior to initiating the onsite testing, the cognitive engine 110 (or the onsite testing engine 120) determines whether a threshold number of iterations has been reached for the onsite testing. In example embodiments, the onsite testing engine 120 only proceeds with the onsite testing if the threshold number of iterations has not been reached.)
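Greenblatt [0024] expressly leaves the "satisfies" relation implementation-dependent (strictly greater than; greater than or equal to; strictly less than; or less than or equal to the threshold value). That definition can be sketched as below; the function name, mode labels, and example threshold value are illustrative assumptions.

```python
# Sketch of the implementation-dependent "satisfies" relation of Greenblatt [0024].
import operator

SATISFY_MODES = {
    "gt": operator.gt,  # strictly greater than
    "ge": operator.ge,  # greater than or equal to
    "lt": operator.lt,  # strictly less than
    "le": operator.le,  # less than or equal to
}

def satisfies(value, threshold, mode="ge"):
    """Return True if `value` satisfies `threshold` under the chosen mode."""
    return SATISFY_MODES[mode](value, threshold)

# Example: a computed injury-risk likelihood compared against a threshold
# risk level to decide whether to initiate the onsite test (block 208).
risk = 0.72
THRESHOLD_RISK = 0.7  # assumed value for illustration
initiate_onsite_test = satisfies(risk, THRESHOLD_RISK, mode="ge")
```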
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Devassy's method and system for multi-camera based machine learning of 2D body parts and to use it in Greenblatt's smart prediction of neurological injury, as both are directed toward methods and systems for machine learning and image analysis and assessment. The determination of obviousness is predicated upon the following findings: One skilled in the art would have been motivated to refine Devassy's machine learning processes for multi-camera based machine learning of 2D body parts, and to use them to improve a machine learning algorithm that can predict high-impact collisions as proposed by Greenblatt, in order to further take into account parameters for bodily injury to individuals and to more accurately represent the level of impact of the overall collision. Furthermore, the prior art collectively includes each claimed element (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Devassy, while the teaching of Greenblatt continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of assessing both the force of the objects in an environment and the collision risks presented to individuals, including the possibility of bodily harm. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claims in question.
Consider Claims 8, 10-11, 20 and 22-23.
8. (Canceled)
10-11. (Canceled)
20. (Canceled)
22-23. (Canceled)
Consider Claims 2 and 14.
The combination of Devassy and Greenblatt teaches:
2. (Original) The system of claim 1, wherein the living body is a human body and the one or more points are associated with a human head. / 14. (Original) The method of claim 13, wherein the living body is a human body and the one or more points are associated with a human head. (Greenblatt: [0012] In example embodiments, a neurological injury may include any injury to the head, neck, or spine including, without limitation, a concussion, a brain hemorrhage, a spinal injury that may lead to temporary or permanent paralysis, or the like. While example embodiments are described herein with respect to predicting neurological injury, it should be appreciated that such embodiments are also applicable to other types of physical injuries. In addition, while example embodiments of the invention represent a technical improvement over conventional injury detection techniques, such example embodiments can also be used in conjunction with one or more existing concussion diagnostic techniques to supplement and/or enhance such techniques. [0035] In addition, in example embodiments, the request/response protocol for the onsite testing may include questions/requests designed to test the player's range of physical movement. For instance, the player's eye movement may be tracked by asking the player to follow with her eyes the changing position of a light. The light may be, for example, emitted from a projection element integrated in the player's helmet. As another non-limiting example, the player may be asked to perform a specific action or a specific series of actions. The player's response can then be evaluated as part of the cognitive analysis performed at block 216 to determine how accurately the player performed the actions. For instance, a player can be asked to walk in a straight line for some period of time; raise his head for some period of time; raise one leg followed by the other leg; or any other one or more physical movements.
Video data, inertial sensor data, or the like can be captured of the player's response to such requests and provided as part of the request/response data to the cognitive engine 110 for cognitive analysis. [0036] FIG. 3 is a process flow diagram of an illustrative method 300 for learning enhancements to smart prediction of neurological injury in accordance with one or more example embodiments. FIG. 3 will be described in conjunction with FIG. 1 hereinafter. At block 302 of the method 300, the cognitive engine 110 may receive feedback data 126. The feedback data 126 may include, for example, output data corresponding to prior cognitive analyses performed by the cognitive engine 110. For instance, the feedback data 126 may include data associated with a scenario in which a player suffered a neurological injury and the cognitive analysis revealed a high likelihood of neurological injury; data associated with a scenario in which a player suffered a neurological injury but the cognitive analysis revealed a low or ambiguous likelihood of neurological injury; data associated with a scenario in which a player did not suffer a neurological injury and the cognitive analysis revealed a low likelihood of neurological injury; and/or data associated with a scenario in which a player did not suffer a neurological injury but the cognitive analysis revealed a high likelihood of neurological injury. Devassy: [0079] Referring to FIG. 9 and FIG. 8B, the inputs (Table 902—FIG. 9) and outputs (Table 904) of the centroid calculation are shown. In particular, for the Head centroid (BPL-C1), five head points BPL-J1 to BPL-J5 (Nose, Left Eye, Right Eye, Left Ear, Right Ear) are used to calculate a centroid point (or set of points or pixels) to represent the Head centroid (BPL-C1), e.g., the point or pixel closest to the center of these five (5) head points in XYZ 3D space, as shown in the output table 904. 
Similarly, for the Torso area (BPL-C14), four (4) body joint points (Left Shoulder, Right Shoulder, Left Hip, Right Hip) are used to calculate a centroid point (or set of points or pixels) to represent the Torso centroid (BPL-C14), e.g., the point or pixel closest to the center of these four points in XYZ 3D space, as shown in the output table 904. For certain centroids, the centroid may be calculated as the center (or middle or average location) between two joints, such as Left Calf centroid (BPL-C2), would be the center distance between the Left Knee and Left Ankle in XYZ space, as shown in output table 904. For certain other centroids, the joint and the centroid may be the same region of the body as the joint, such as for Left Foot (use Left Ankle joint), Left Wrist (use Left Wrist joint), Right Ankle (use Right Ankle joint), Right Wrist (use Right Wrist joint).)
Consider Claims 3 and 15.
The combination of Devassy and Greenblatt teaches:
3. (Original) The system of claim 1, wherein the defined area is field of play. / 15. (Original) The method of claim 13, wherein the defined area is a field of play. (Greenblatt: [0015] Referring first to FIG. 1, an environment 102 is depicted in which an activity involving multiple participants is occurring. In example embodiments, the activity is an athletic activity such as a sporting contest that by its nature involves repeated physical contact between participants, where such contact often includes high-impact collisions with significant amounts of force. In those example embodiments in which the activity is a sporting activity, the environment 102 may be a field, arena, stadium, or any other venue in which such an activity may take place. For ease of explanation, example embodiments of the invention will be described hereinafter with respect the example activity of an American football game. [0017], [0023], [0039] FIG. 4 is a schematic diagram of an illustrative networked architecture 400 configured to implement one or more example embodiments of the disclosure. The illustrative networked architecture 400 includes one or more cognitive processing servers 402 configured to communicate via one or more networks 406 with one or more field devices 404. The field device(s) 404 may include devices that are used in connection with an activity involving physical contact such as an athletic contest. The field device(s) 404 may include, without limitation, a personal computer (PC), a tablet, a smartphone, a wearable device, a voice-enabled device, or the like. The field device(s) 404 may further include one or more sensors disposed in an environment in which the activity is taking place or integrated, affixed, or otherwise associated with equipment, uniforms, or the like worn by participants in the activity. Such sensors may capture the sensor data 112 evaluated by the cognitive engine 110 (FIG. 1). 
Such sensors may include, without limitation, an inertial sensor (e.g., an accelerometer, a gyroscope, etc.); a vibration sensor; a force sensor; an image sensor; a sensor that takes biophysical measurements (e.g., a blood pressure sensor, a heart rate sensor; an electrocardiography (EKG) sensor or the like that measures electrical activity of the heart; a sensor to measure brain activity; and so forth. Devassy: [0079] Referring to FIG. 9 and FIG. 8B, the inputs (Table 902—FIG. 9) and outputs (Table 904) of the centroid calculation are shown. In particular, for the Head centroid (BPL-C1), five head points BPL-J1 to BPL-J5 (Nose, Left Eye, Right Eye, Left Ear, Right Ear) are used to calculate a centroid point (or set of points or pixels) to represent the Head centroid (BPL-C1), e.g., the point or pixel closest to the center of these five (5) head points in XYZ 3D space, as shown in the output table 904. Similarly, for the Torso area (BPL-C14), four (4) body joint points (Left Shoulder, Right Shoulder, Left Hip, Right Hip) are used to calculate a centroid point (or set of points or pixels) to represent the Torso centroid (BPL-C14), e.g., the point or pixel closest to the center of these four points in XYZ 3D space, as shown in the output table 904. For certain centroids, the centroid may be calculated as the center (or middle or average location) between two joints, such as Left Calf centroid (BPL-C2), would be the center distance between the Left Knee and Left Ankle in XYZ space, as shown in output table 904. For certain other centroids, the joint and the centroid may be the same region of the body as the joint, such as for Left Foot (use Left Ankle joint), Left Wrist (use Left Wrist joint), Right Ankle (use Right Ankle joint), Right Wrist (use Right Wrist joint). [0081] Referring to FIG. 12, a table 1200 shows sample values for XYZ coordinates for the 14 BPL-Cs for Player A (PA) and Player B (PB) for Image Frame 1, and Frames (2) to (N), for ICU1. 
It also shows a sample structure for ICU2 to ICUM, for M ICUs. Any number of ICUs may be used and positioned around the perimeter of the ring or sports arena, to get multiple views of the players on the field or in the ring, if desired. The more ICUs, the better likelihood that most BPL-C views will not be occluded. [0082] Referring back to FIG. 6, after block 604 calculates the centroids, block 606 retrieves Body Mask/Areas from the Server 216 for one Player (e.g., head area, torso area), such as the masks/areas (or sets of pixels) 756 (body) and 754 (head) shown in FIG. 7B.)
Consider Claims 4 and 16.
The combination of Devassy and Greenblatt teaches:
4. (Original) The system of claim 3, wherein the at least one camera comprises a plurality of cameras, each camera positioned to at least partially surround the field of play. / 16. (Original) The method of claim 15, wherein the at least one camera comprises a plurality of cameras, each camera positioned to at least partially surround the field of play. (Greenblatt: [0015] Referring first to FIG. 1, an environment 102 is depicted in which an activity involving multiple participants is occurring. In example embodiments, the activity is an athletic activity such as a sporting contest that by its nature involves repeated physical contact between participants, where such contact often includes high-impact collisions with significant amounts of force. In those example embodiments in which the activity is a sporting activity, the environment 102 may be a field, arena, stadium, or any other venue in which such an activity may take place. For ease of explanation, example embodiments of the invention will be described hereinafter with respect the example activity of an American football game. [0017], [0023], [0039] FIG. 4 is a schematic diagram of an illustrative networked architecture 400 configured to implement one or more example embodiments of the disclosure. The illustrative networked architecture 400 includes one or more cognitive processing servers 402 configured to communicate via one or more networks 406 with one or more field devices 404. The field device(s) 404 may include devices that are used in connection with an activity involving physical contact such as an athletic contest. The field device(s) 404 may include, without limitation, a personal computer (PC), a tablet, a smartphone, a wearable device, a voice-enabled device, or the like. 
The field device(s) 404 may further include one or more sensors disposed in an environment in which the activity is taking place or integrated, affixed, or otherwise associated with equipment, uniforms, or the like worn by participants in the activity. Such sensors may capture the sensor data 112 evaluated by the cognitive engine 110 (FIG. 1). Such sensors may include, without limitation, an inertial sensor (e.g., an accelerometer, a gyroscope, etc.); a vibration sensor; a force sensor; an image sensor; a sensor that takes biophysical measurements (e.g., a blood pressure sensor, a heart rate sensor; an electrocardiography (EKG) sensor or the like that measures electrical activity of the heart; a sensor to measure brain activity; and so forth. Devassy: [0079] Referring to FIG. 9 and FIG. 8B, the inputs (Table 902—FIG. 9) and outputs (Table 904) of the centroid calculation are shown. In particular, for the Head centroid (BPL-C1), five head points BPL-J1 to BPL-J5 (Nose, Left Eye, Right Eye, Left Ear, Right Ear) are used to calculate a centroid point (or set of points or pixels) to represent the Head centroid (BPL-C1), e.g., the point or pixel closest to the center of these five (5) head points in XYZ 3D space, as shown in the output table 904. Similarly, for the Torso area (BPL-C14), four (4) body joint points (Left Shoulder, Right Shoulder, Left Hip, Right Hip) are used to calculate a centroid point (or set of points or pixels) to represent the Torso centroid (BPL-C14), e.g., the point or pixel closest to the center of these four points in XYZ 3D space, as shown in the output table 904. For certain centroids, the centroid may be calculated as the center (or middle or average location) between two joints, such as Left Calf centroid (BPL-C2), would be the center distance between the Left Knee and Left Ankle in XYZ space, as shown in output table 904.
For certain other centroids, the joint and the centroid may be the same region of the body as the joint, such as for Left Foot (use Left Ankle joint), Left Wrist (use Left Wrist joint), Right Ankle (use Right Ankle joint), Right Wrist (use Right Wrist joint). [0081] Referring to FIG. 12, a table 1200 shows sample values for XYZ coordinates for the 14 BPL-Cs for Player A (PA) and Player B (PB) for Image Frame 1, and Frames (2) to (N), for ICU1. It also shows a sample structure for ICU2 to ICUM, for M ICUs. Any number of ICUs may be used and positioned around the perimeter of the ring or sports arena, to get multiple views of the players on the field or in the ring, if desired. The more ICUs, the better likelihood that most BPL-C views will not be occluded. [0082] Referring back to FIG. 6, after block 604 calculates the centroids, block 606 retrieves Body Mask/Areas from the Server 216 for one Player (e.g., head area, torso area), such as the masks/areas (or sets of pixels) 756 (body) and 754 (head) shown in FIG. 7B.)
Consider Claims 7 and 19.
The combination of Devassy and Greenblatt teaches:
7. (Currently amended) The system of claim 1, wherein the learning algorithm is a convolutional neural network. / 19. (Currently amended) The method of claim 13, wherein the learning algorithm is a convolutional neural network. (Greenblatt: [0022] In certain example embodiments, the cognitive engine 110 is a machine learning construct such as a type of neural network (e.g., a convolutional neural network) that is capable of being trained based on ground-truth data to more accurately determine the likelihood that a player has sustained a neurological injury. The ground-truth data may include, for example, image data, video data, and/or other forms of sensor data (e.g., force/acceleration data) known to be associated with the occurrence of neurological injury. The cognitive engine 110 may be trained to learn patterns from such historical ground-truth data using, for example, a computer vision-based approach.)
Consider Claims 9 and 21.
The combination of Devassy and Greenblatt teaches:
9. (Currently amended) The system of claim 1, wherein the computer processor is further configured to generate an alert message to a user interface when the calculated force exceeds a predetermined threshold corresponding to one or more of a high risk impact and a low limit impact threshold. / 21. (Currently amended) The method of claim 13, wherein the computer processor is further configured to generate an alert message to a user interface when the calculated force exceeds the predetermined threshold corresponding to one or more of a high risk impact and a low limit impact threshold. (Greenblatt: [0024] In example embodiments, the cognitive engine 110 performs the cognitive risk analysis at block 206 in real-time such that a determination can be made dynamically as to whether to initiate an onsite test for potential neurological injury to a player. More specifically, at block 208 of the method 200, in example embodiments, the cognitive engine 110 determines whether the likelihood of potential neurological injury to the player of interest determined based on the cognitive analysis performed at block 206 satisfies a threshold risk level. As used herein, and depending on the implementation, a first value satisfies a second value (e.g., a threshold value) if the first value is strictly greater than the second value; greater than or equal to the second value; strictly less than the second value; or less than or equal to the second value. [0025]-[0026], [0029] Referring now to FIG. 2B, at block 218 of the method 200, in example embodiments, the cognitive engine 110 determines whether the updated likelihood of neurological injury to the player of interest satisfies (e.g., is greater than or equal to) a first threshold value. The first threshold value may be a value indicative of greater certainty that the player has suffered a neurological injury. 
As such, in response to a positive determination at block 218, the cognitive engine 110 may initiate an interrupt 124 at block 222 of the method 200 to allow for more comprehensive neurological testing to be performed on the player. In example embodiments, the interrupt 124 may be a signal or notification (e.g., a light, a message, a speaker announcement, a highlight shown on video monitors in the environment 102, etc.) that is potentially sent to an onsite device to inform a coach, manager, medical professional, or the like to remove the player from the environment 102 (e.g., a playing field) in order to perform a more comprehensive evaluation of the player for potential neurological injury. For instance, the onsite testing that is initially performed on the player to determine the updated likelihood of neurological injury to the player may be an initial phase of a concussion protocol, while the more comprehensive evaluation performed after the interrupt 124 is issued may be a later, more detailed phase of the concussion protocol. In example embodiments, after the interrupt 124 is issued, the activity may be temporarily halted (e.g., a timeout taken) to allow the player to leave or be removed from the field for evaluation on the sideline.)
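The alert generation recited in claims 9 and 21 (an alert message to a user interface when the calculated force exceeds a threshold corresponding to a high risk impact or a low limit impact) can be sketched as follows; the threshold values, function names, and message format are illustrative assumptions rather than teachings of either reference (cf. Greenblatt [0029], where the interrupt may be a light, a message, or a speaker announcement).

```python
# Sketch of alert generation against high-risk and low-limit impact thresholds.
HIGH_RISK_THRESHOLD = 80.0  # assumed force units for illustration
LOW_LIMIT_THRESHOLD = 20.0

def classify_impact(force):
    """Classify a calculated force against the two assumed thresholds."""
    if force >= HIGH_RISK_THRESHOLD:
        return "high risk impact"
    if force >= LOW_LIMIT_THRESHOLD:
        return "low limit impact"
    return None  # below both thresholds: no alert

def alert_message(player_id, force):
    """Return an alert string for the user interface, or None if no
    threshold is exceeded."""
    label = classify_impact(force)
    if label is None:
        return None
    return f"ALERT: {label} for player {player_id} (force={force:.1f})"

msg = alert_message("PA", 92.3)
```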
Consider Claims 12 and 24.
The combination of Devassy and Greenblatt teaches:
12. (Original) The system of claim 1, further comprising a memory in communication with the computer processor for recording and cataloging video images and impact data corresponding thereto. / 24. (Original) The method of claim 13, further comprising recording and cataloging video images and impact data corresponding thereto in a memory in communication with the computer processor. (Greenblatt: [0041] In an illustrative configuration, the cognitive processing server 402 may include one or more processors (processor(s)) 408, one or more memory devices 410 (generically referred to herein as memory 410), one or more input/output (“I/O”) interface(s) 412, one or more network interfaces 414, and data storage 418. The cognitive processing server 402 may further include one or more buses 416 that functionally couple various components of the cognitive processing server 402. [0042], [0052] Referring now to other illustrative components of the cognitive processing server 402, the input/output (I/O) interface(s) 412 may facilitate the receipt of input information by the cognitive processing server 402 from one or more I/O devices as well as the output of information from the cognitive processing server 402 to the one or more I/O devices. The I/O devices may include any of a variety of components such as a display or display screen having a touch surface or touchscreen; an audio output device for producing sound, such as a speaker; an audio capture device, such as a microphone; an image and/or video capture device, such as a camera; a haptic unit; and so forth. Any of these components may be integrated into the cognitive processing server 402 or may be separate. The I/O devices may further include, for example, any number of peripheral devices such as data storage devices, printing devices, and so forth. Lavie: [0039] FIG. 2 illustrates various components (or devices or logic) of a system and method 200 for multi-camera, 3D body part labeling and performance metrics calculation, including Semantic Labeling & Metrics Logic 202, which includes ICU/Player ID Logic 204, Pose Estimation Model Logic 206, Centroid & Visibility/Occlusion Logic 208, Transform/Averaging & Player Tracking Logic 210, and Metrics Calculation Logic 212. The Player ID Logic 204 receives digital data from the 2D/3D Image Capture Units (ICU1-ICU4) 110A-110D, where each pixel in each image from each ICU may have a format of X,Y,Z; R,G,B, as described herein. The Player ID Logic 204 may also store the data for each pixel in the image frame received from each ICU onto an ICU/Player ID Server 214, e.g., in a 2D (row(i),column(j)), e.g., 480 rows×640 columns of pixels, table or matrix format, where each cell in the table represents a pixel, such as pixels 860 shown in FIG. 8A (discussed more hereinafter). In some embodiments, the ICUs may store the 2D/3D image frame data (XYZ; RGB) directly on the ICU/Player ID Server 214.)
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAHMINA N ANSARI whose telephone number is (571) 270-3379. The examiner can normally be reached on IFP Flex, Monday through Friday, 9:00 a.m. to 5:00 p.m.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O'NEAL MISTRY, can be reached at 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
TAHMINA N. ANSARI
Examiner
Art Unit 2674
January 30, 2026
/TAHMINA N ANSARI/Primary Examiner, Art Unit 2674