DETAILED CORRESPONDENCE
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This final Office action on the merits is in response to the communication received on 21 November 2025. The amendments to claims 1-19 are acknowledged and have been carefully considered. Claims 1-19 are pending and are considered below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-7, 11-14, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Jorasch et al. (US 20220006813) in view of Xin et al. (CN 112861624) and Tran (US 20160287166).
Claims 1, 18, and 19: Jorasch discloses an information processing apparatus, method, and storage medium comprising: a central processing unit (CPU) ([68 “system 100 may comprise a plurality of resource devices 102a-n in communication via or with a network 104. According to some embodiments, system 100 may comprise a plurality of user devices 106a-n, a plurality of peripheral devices 107a-n and 107p-z, third-party device 108, and/or a central controller,” 69-73]) configured to:
receive a detection result from a camera, wherein the camera is in a space ([219, 220, 496-501, 512 “Cameras 6352a-e capture a video signal that is transmitted to house controllers 6305a-b via a wired or wireless connection for storage or processing. In some embodiments, house controllers 6305a-b may then transmit the video to central controller 110. In other embodiments, any of cameras 6352a-e send a video feed directly to central controller 110. In one embodiment, a game player might bring up the video feed from one or more of cameras 6352a-e in order to keep track of the location of other game players. Such a video feed, for example, could allow a first player in bedroom 6321b to see a feed from camera 6352e to identify that a second game player had gone back to house,” Figs. 63A, 63B]);
recognize a user in the space based on the detection result ([496-502, 503 “Identification readers 6308a and 6308b are positioned at the entry points 6310a and 6310c, respectively, and serve to identify people and allow/deny access as they attempt to move through the entry points. For example, identification readers can be RFID readers to scan a badge, a camera to identify the person via face recognition, a scanner to identify a person by a carried user device, a microphone for voice recognition,” 504]);
Jorasch does not explicitly disclose the following limitations; however, Xin discloses:
determine skeleton information from the detection result, wherein the skeleton information includes a coordinate position of each part of the user (Page 4 “result display module 4 uses a django frame to realize foreground and background, human skeletonization data is obtained through processing a series of data, the data is stored in a pseudo-color map, then coordinates of a point cloud of 2d to 3d are obtained through conversion of point cloud coordinates, and joint angles are calculated through coordinates. A 3d skeleton map of the person is drawn through python visualization through the coordinates of the 3d point cloud and the connection relation between the points, a front view of the 3d skeleton of the person is obtained through certain rotation,” Page 15 “Recognizing human body postures based on random forests: on the basis of the previous step of research, the depth image skeletonization model can be trained on the basis of transfer learning, so that skeletonization data can be obtained through the depth image, and it is known that different postures of a human body have certain characteristics at each joint angle of the human body. The invention can obtain the relationship between the posture and the included angle by calculating the trunk angle, the anteflexion angle, the hip angle, the shoulder angle and the knee angle. The invention calculates the classified attitude joint angles and makes attitude labels, and the relation between the included angle and the attitude can be completely learned through random forests, thereby obtaining the attitude recognition model of the invention,”) Examiner Note: Under a broadest reasonable interpretation, the Examiner interprets Xin's disclosure of detecting and processing skeletal data to determine positional coordinates of users/patients as teaching the detection of user-related actions and of the positions of individual body parts, i.e., the claimed skeleton information and movement detection.
determine an action of the user based on the skeleton information (Page 4 “gesture recognition part, the invention prints gesture labels on images in advance, then calculates included angles of all parts of the body and stores the included angles as a training set, and finally obtains the gesture recognition model by taking the gesture labels and the included angles as training data through random forests,” Page 14 “migration learning is carried out by utilizing network parameters of the opencast network, and depth images corresponding to rgb images one by one and labels which are well made in advance are used as opencast network input to train the depth image skeletonization model of the invention. For the gesture recognition part, the invention prints gesture labels on images in advance, then calculates included angles of all parts of the body and stores the included angles as a training set, and finally obtains the gesture recognition model by taking the gesture labels and the included angles as training data through random forests,”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jorasch to determine skeleton information from the detection result, wherein the skeleton information includes a coordinate position of each part of the user, and to determine an action of the user based on the skeleton information, as taught by Xin, in order to precisely detect and process user movements, determine the movements of a wide variety of body parts, and thereby improve the accuracy and specificity of the determined user actions.
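For illustration only, the joint-angle computation that Xin describes (calculating "the included angle of the joints of the human body according to the included angle between the vectors" from 3D skeleton coordinates) can be sketched as below. This is the Examiner's hypothetical sketch of the standard vector-angle formula, not code from the reference; the function name and joint ordering are assumptions.

```python
import numpy as np

def joint_angle(a, b, c):
    """Included angle (degrees) at joint b, formed by the vectors
    from b to the adjacent joints a and c (e.g., hip-knee-ankle)."""
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    # Cosine of the included angle; clip guards against floating-point
    # values slightly outside [-1, 1] before arccos.
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# A fully extended limb yields ~180 degrees; a right-angle bend yields 90.
print(joint_angle([0, 0, 0], [0, 1, 0], [0, 2, 0]))  # 180.0
print(joint_angle([1, 0, 0], [0, 0, 0], [0, 1, 0]))  # 90.0
```

Angles computed this way for the trunk, hip, shoulder, and knee joints would form the feature vector that Xin feeds to the random-forest posture classifier.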
Jorasch does not explicitly disclose the following limitations; however, Tran discloses:
determine a plurality of health points based on the action of the user ([37 “wear one or more wearable patient monitoring appliances such as wrist-watches or clip on devices or electronic jewelry to monitor the patient. One wearable appliance such as a wrist-watch includes sensors 40,”]), wherein the plurality of health points indicates a performance of a healthy behavior from the user ([37 “sensors 40 include standard medical diagnostics for detecting the body's electrical signals emanating from muscles (EMG and EOG) and brain (EEG) and cardiovascular system (ECG). Leg sensors can include piezoelectric accelerometers designed to give qualitative assessment of limb movement. Additionally, thoracic and abdominal bands used to measure expansion and contraction of the thorax and abdomen respectively….One or more position sensors can be used for detecting orientation of body (lying on left side, right side or back) during sleep diagnostic recordings. Each of sensors 40 can individually transmit data to the server 20 using wired or wireless transmission,” 45, 50, 148 “system will detect patient skeleton structure, stride and frequency; and based on this information to judge whether patient has joint problem, asymmetrical bone structure, among others. The system can store historical gait information, and by overlaying current structure to the historical (normal) gait information, gait changes can be detected,” 165 “wearable appliance provides an in-depth, cost-effective mechanism to evaluate a patient's health condition. Certain cardiac conditions can be controlled, and in some cases predicted, before they actually occur. Moreover, data from the patient can be collected and analyzed while the patient participates in their normal, day-to-day activities,” 267 “to best analyze a patient's fitness or health, additional patient data is utilized by a fitness analyzer. 
This data may include personal data, such as date of birth, ethnic group, sex, physical activity level, and address. The data may further include clinical data such as a visit identification, height, weight, date of visit, age, blood pressure, pulse rate, respiration rate, and so forth,”]); and
perform a process to display a notification of the plurality of health points to the user ([149 “provides a patient interface 90 to assist the patient in easily accessing information. In one embodiment, the patient interface includes a touch screen; voice-activated text reading; one touch telephone dialing; and video conferencing. The touch screen has large icons that are pre-selected to the patient's needs, such as his or her favorite web sites or application programs,” 163 “system allows patients to conduct a low-cost, comprehensive, real-time monitoring of their vital parameters such as ambulation and falls. Information can be viewed using an Internet-based website, a personal computer, or simply by viewing a display on the monitor. Data measured several times each day provide a relatively comprehensive data set compared to that measured during medical appointments separated by several weeks or even months. This allows both the patient and medical professional to observe trends in the data,”]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jorasch to determine a plurality of health points based on the action of the user, wherein the plurality of health points indicates a performance of a healthy behavior from the user, and to perform a process to display a notification of the plurality of health points to the user, as taught by Tran, in order to provide the user with comprehensive, real-time feedback on health-related behavior and to allow both the user and medical professionals to observe trends in the collected data.
Claim 2: Jorasch in view of Xin and Tran disclose the information processing apparatus according to claim 1, and Jorasch further discloses:
the camera captures an image ([219, 220, 496-501, 512 “Cameras 6352a-e capture a video signal that is transmitted to house controllers 6305a-b via a wired or wireless connection for storage or processing. In some embodiments, house controllers 6305a-b may then transmit the video to central controller 110. In other embodiments, any of cameras 6352a-e send a video feed directly to central controller 110. In one embodiment, a game player might bring up the video feed from one or more of cameras 6352a-e in order to keep track of the location of other game players. Such a video feed, for example, could allow a first player in bedroom 6321b to see a feed from camera 6352e to identify that a second game player had gone back to house,” Figs. 63A, 63B]),
the detection result is the captured image ([219, 220, 496-501, 512, Figs. 63A, 63B]; see the passage quoted for the preceding limitation), and
the CPU is further configured to ([68, 69-73]; see the passage quoted for claim 1):
grant, to the user, the plurality of health points corresponding to the healthy behavior ([265 “central controller may infer user health status from game play. In various embodiments, one or more vital signs (e.g., blood pressure) may be obtained directly or indirectly from sensors. In various embodiments, the central controller may utilize user actions as an indicator of health state or status. If a user's game performance has declined, then this may be indicative of health problems (e.g., dehydration, fatigue, infection, heart attack, stroke, etc.). In various embodiments, game performance may be measured in terms of points scored, points scored per unit of time, opponents neutralized, levels achieved, objectives achieved, time lasted, skill level of opponents beaten,” 266, 267]),
Jorasch does not explicitly disclose the following limitations; however, Xin discloses:
analyze the captured image as the detection result (Page 4 “migration learning is carried out by utilizing network parameters of the opencast network, and depth images corresponding to rgb images one by one and labels which are well made in advance are used as opencast network input to train the depth image skeletonization model of the invention. For the gesture recognition part, the invention prints gesture labels on images in advance, then calculates included angles of all parts of the body and stores the included angles as a training set, and finally obtains the gesture recognition model by taking the gesture labels and the included angles as training data through random forests,” Page 25 “task of detecting the posture of the aged human body, the corresponding depth image needs to be acquired in real time to acquire the skeleton image of the human body for the medical staff to see, so that certain accuracy of the skeleton image is required, and the invention is good at the aspect. Based on the task, the invention constructs a depth image acquisition system, acquires the human skeleton through the depth image and analyzes the human posture,”);
determine, based on the captured image, at least one of a specific posture of the user or a specific movement of the user (Page 3 “human skeletonization module is used for training a human skeletonization model capable of identifying the depth image, acquiring a training label by adopting an opencast model, and storing the rgb image which is extracted from paf and heatmap by using the opencast model as a label of a subsequent training model; then, transfer learning is carried out on an opencast network, the depth image is used as network input, and the obtained heatmap and paf are used as labels to train to obtain a human skeletonization model of the depth image,” Page 15 “Recognizing human body postures based on random forests: on the basis of the previous step of research, the depth image skeletonization model can be trained on the basis of transfer learning, so that skeletonization data can be obtained through the depth image, and it is known that different postures of a human body have certain characteristics at each joint angle of the human body. The invention can obtain the relationship between the posture and the included angle by calculating the trunk angle, the anteflexion angle, the hip angle, the shoulder angle and the knee angle,”);
determine, based on the at least one of the specific posture of the user or the specific movement of the user (Page 4 “the posture analysis module 3 converts the 2d coordinates obtained from the human skeletonization result by adopting a random forest to obtain human skeletonized coordinates in 3d point cloud, and then calculates the included angle of the joints of the human body according to the included angle between the vectors. And calculating an included angle of the labeled data, inputting the included angle and the label into a random forest for training to obtain a posture classification model, and only inputting the calculated included angle after skeletonization during subsequent posture recognition,”), a performance of at least one of a registered posture of the user or a registered movement of the user, wherein each of the registered posture of the user and the registered movement of the user corresponds to the healthy behavior (Page 4 “gesture recognition part, the invention prints gesture labels on images in advance, then calculates included angles of all parts of the body and stores the included angles as a training set, and finally obtains the gesture recognition model by taking the gesture labels and the included angles as training data through random forests. The method obtains a skeleton map of a human body through a trained depth image skeletonization model, then obtains a 3d skeleton map by converting the skeleton map into a 3d point cloud, calculates the angle of each joint of the human body, and obtains the posture of the human body through the trained posture recognition model,” Page 5 “train own posture classification model, and the invention adopts random forest. And converting the 2d coordinates obtained from the human body skeletonization result to obtain human body skeletonization coordinates in the 3d point cloud, and then calculating the joint angle of the human body according to the included angle between the vectors. 
And calculating an included angle of the labeled data, and inputting the included angle and the label into a random forest for training to obtain the posture classification model,”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jorasch to analyze the captured image as the detection result; to determine, based on the captured image, at least one of a specific posture of the user or a specific movement of the user; and to determine, based on the at least one of the specific posture of the user or the specific movement of the user, a performance of at least one of a registered posture of the user or a registered movement of the user, wherein each of the registered posture of the user and the registered movement of the user corresponds to the healthy behavior, as taught by Xin, in order to precisely detect and process user movements, determine the movements of a wide variety of body parts, and thereby improve the accuracy and specificity of the determined user actions.
Claim 3: Jorasch in view of Xin and Tran disclose the information processing apparatus according to claim 2, and Jorasch further discloses wherein the CPU is further configured to determine the plurality of health points based on a difficulty level of the healthy behavior ([860 “user is playing a game and it is determined by AI accelerator 8060 that the user is performing poorly a signal can be sent back to user device 106a to adjust the difficulty to a more appropriate level,” 935, 1130 “usage pattern may correlate to a skill level in a game, and the central controller may utilize the inferred skill level to adjust the difficulty of a game,” 1295, 1311 “game could introduce more challenging opponents or adjust the player skill and make it more difficult to score goals. Likewise, if the players heart rate is elevated for an extended period of time, the game difficulty could be adjusted to allow for recovery of the heart and a slowing of the heat rate,” 1726-1732, 1810-1812, 1821 “maximum of 6 points (for example) may be allocated, with 1 point deducted from the maximum for each 10% deviation of the identified gesture from a reference gesture. In various embodiments, point allocation instructions specify that a predetermined number of points will be allocated if the identified gesture matches a reference gesture,”]).
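For illustration only, the point-allocation scheme quoted from Jorasch [1821] (a maximum of 6 points, with 1 point deducted per 10% deviation from the reference gesture) can be sketched as below. This is the Examiner's hypothetical sketch; the zero floor and the truncation to full 10% increments are assumptions not stated in the reference.

```python
def allocate_points(deviation_pct, max_points=6):
    """Deduct 1 point from max_points for each full 10% deviation of the
    identified gesture from the reference gesture (per Jorasch [1821]);
    the floor at zero is an assumed detail."""
    deduction = int(deviation_pct // 10)
    return max(max_points - deduction, 0)

print(allocate_points(0))    # 6 (identified gesture matches the reference)
print(allocate_points(25))   # 4 (two full 10% deviations deducted)
print(allocate_points(100))  # 0 (floored at zero)
```

Under this reading, the awarded points scale inversely with the difficulty of matching the reference gesture, consistent with determining health points based on a difficulty level.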
Claim 4: Jorasch in view of Xin and Tran disclose the information processing apparatus according to claim 1 above, and Jorasch further discloses a memory configured to store specific information on the plurality of health points, wherein the CPU is further configured to:
determine a total value of the plurality of health points in a specific time period ([860 “user is playing a game and it is determined by AI accelerator 8060 that the user is performing poorly a signal can be sent back to user device 106a to adjust the difficulty to a more appropriate level,” 935, 1130 “usage pattern may correlate to a skill level in a game, and the central controller may utilize the inferred skill level to adjust the difficulty of a game,” 1295, 1311 “game could introduce more challenging opponents or adjust the player skill and make it more difficult to score goals. Likewise, if the players heart rate is elevated for an extended period of time, the game difficulty could be adjusted to allow for recovery of the heart and a slowing of the heat rate,” 1821 “maximum of 6 points (for example) may be allocated, with 1 point deducted from the maximum for each 10% deviation of the identified gesture from a reference gesture. In various embodiments, point allocation instructions specify that a predetermined number of points will be allocated if the identified gesture matches a reference gesture,”]); and
perform a process to display a notification of the total value of the plurality of health points to the user ([606, 1222 “two or more peripheral devices are configured to communicate with one another. The lines of communication may allow transmission of messages (e.g., chat messages, taunts, etc.), transmission of instructions, transmissions of alerts or notifications (e.g., your friend is about to start playing a game), and/or transmission of any other signals,” 1237 “player may be informed that 60% of players took a left at a similar juncture in the game, with an average subsequent score of 234 points. On the other hand, 40% of players took a right with an average subsequent score of 251. In various embodiments, a player may wish to see decisions of only a subset of other players. This subset of other players may be, for example, the players friends, or top players,” 1279 “rating of the user's ability to function well on a team. For example, a users mouse might store an evaluation of the user's team skills, such as by storing a rating (provided by other players or determined algorithmically by one or more game controllers) of 9 on a 10 point scale. When the user uses his mouse to play in a new game, that new game can access the 9/10 rating from the user's mouse and use the rating to match the user with other players of a similar team rating level,”]),
wherein the process to display the notification of the plurality of health points is associated with the specific time period ([114 “central controller may include software for providing notifications and/or status updates,” 606 “app could provide notifications to users as to game location changes, time changes, player changes, cancellations, etc. Various embodiments contemplate that any other feedback data, or any other input data from a peripheral device, may be shown, may be shown over time, or may be shown in any other fashion,” 1021, 1222 “lines of communication may allow transmission of messages (e.g., chat messages, taunts, etc.), transmission of instructions, transmissions of alerts or notifications (e.g., your friend is about to start playing a game), and/or transmission of any other signals,” 1604 “point allocation instructions specify that a predetermined number of points (e.g., five points) will be allocated if the biometric reading matches a stored biometric reading from the authentic user and no points will be allocated otherwise. In various embodiments, point allocation instructions specify that a number of points will be allocated, up to a predetermined maximum number of points, based on (e.g., proportional to) the degree or confidence of a match between the biometric reading and a stored biometric reading from the authentic user,”]), and
the performance of the healthy behavior is in the specific time period ([1606 “An AI module might review health and mental performance markers and make in-game suggestions to improve game play. For example, if the module detects elevated cortisol levels from metabolite sensors or an increase in sweat secretion from a sweat sensor, the module could provide feedback to the player to calm down, breathe, or relax,” 1608 “player's skill level might vary with fatigue, health, time of day, amount of recent practice or gameplay and other factors. The inputs of the devices according to various embodiments could be utilized to train an AI module that calculates a relative skill level, based upon long-run player performance adjusted for fatigue, time of day and other factors,”]).
Claim 5: Jorasch in view of Xin and Tran disclose the information processing apparatus according to claim 2 above, and Jorasch further discloses wherein the
camera is in a display device ([134 “peripheral device might have the capability to output images, video, characters (e.g., on a simple LED screen), lights (e.g., activating or deactivating one or more LED lights or optical fibers on the peripheral device), laser displays, audio, haptic outputs (e.g., vibrations), altered temperature (e.g., a peripheral device could activate a heating element where the user's hand is located), electrical pulses, smells, scents, or any other sensory output or format….peripheral device may have the capability to input images (e.g., with a camera), audio (e.g., with a microphone), touches (e.g., with a touchscreen or touchpad), clicks, key presses, motion (e.g., with a mouse or joystick), temperature, electrical resistance readings, positional readings,”]),
the display device is in the space ([134, 179, 192 “screen 3815 may be a display screen, touch screen, or any other screen. Screen 3815 may be a curved display using LCD, LED, mini-LED, TFT, CRT, DLP, or OLED technology or any other display technology that can render pixels over a flat or curved surface, or any other display technology. Screen 3815 may be covered by a chemically tempered glass or glass strengthened in other ways, e.g., Gorilla® Glass®, or covered with any other materials to stand up to the wear and tear of repeated touch and reduce scratches, cracks, or other damage. One use of a display screen 3815 is to allow images or video, such as dog image 3830, to be displayed to a user. Such an image could be retrieved from user table 700 (e.g., field 726) by central controller 110. Images displayed to a user could include game updates, game tips, game inventory lists, advertisements, promotional offers, maps, work productivity tips, images of other players or co-workers, educational images, sports scores and/or highlights, stock prices, news headlines, and the like. In some embodiments, display screen 3815 displays a live video connection with another user which may result in a greater feeling of connection between the two users,”]),
the camera detects specific information regarding at least one person around the display device ([245 “Specifications may include the quantities of various components (e.g., a mouse may have two or three buttons; e.g., a mouse may have one, two, or more LED lights; e.g., a camera peripheral may have one, two, three, etc., cameras). Specifications may include the capabilities of a given component. For example, a specification may indicate the resolution of a camera, the sensitivity of a mouse button, the size of a display screen, or any other capability, or any other functionality,” 403 “Engagement indicator(s) field 5312 may store an indication of one or more indicators used to determine an engagement level. Indicators may include biometrics as described above. Exemplary indicators include signals derived from voice, such as rapid speech, tremors, cadence, volume, etc. Exemplary indicators may include posture. For example, when a person is sitting in their chair or leaning forward, they may be presumed to be engaged with the meeting,” 553 “Subject field 7310 may store an indication of a user who is the subject of a sensor. A subject may be a person detected by the sensor, a person who triggers a sensor, a person identifiable by sensor data, and/or anyone who contributes to the generation of sensor data,”]), and
the at least one person is different from the user ([527 “Projector 6367b in room 6321c could also project on the walls the game avatar or player name of the second user to alert the individual that play from another person was requested,” 1040 “central controller knows each type of meeting taking place (informational, innovation, commitment and alignment). Based on the meeting type, the central controller displays meeting specific information on display devices and to attendees in advance. Innovation sessions should have lighter/more fun messages. On the other hand, commitment meetings might prevent all such messages.”]).
Claim 6: Jorasch in view of Xin and Tran disclose the information processing apparatus according to claim 5 above, and Jorasch further discloses:
wherein the CPU is further configured to perform, based on the grant of the plurality of health points, the process to display the notification of the plurality of health points on the display device ([114 “central controller may include software for providing notifications and/or status updates….Notifications or status updates may be sent to peripheral devices, user devices, smartphones, or to any other devices,” 260 “central controller 110 may track user gameplay, according to some embodiments. The central controller 110 may track one or more of: peripheral device use; game moves, decisions, tactics, and/or strategies; vital readings (e.g., heart rate, blood pressure, etc.); team interactions; ambient conditions (e.g., dog barking in the background; local weather); or any other information. In various embodiments, the central controller 110 may track peripheral device activity or use. This may include button presses, key presses, clicks, double clicks, mouse motions, head motions, hand motions, motions of any other body part, directions moved, directions turned, speed moved, distance moved, wheels turned (e.g., scroll wheels turned), swipes (e.g., on a trackpad), voice commands spoken, text commands entered, messages sent, or any other peripheral device interaction,” 265 “game performance may be measured in terms of points scored, points scored per unit of time, opponents neutralized, levels achieved, objectives achieved, time lasted, skill level of opponents beaten,”]).
Claim 7: Jorasch in view of Xin and Tran discloses the information processing apparatus according to claim 6 above, and Jorasch further discloses wherein the CPU ([321 “central controller may include software, programs, modules, or the like, including: an operating system; communications software, such as software to manage phone calls, video calls, and texting with meeting owners and meeting participants; an artificial intelligence (AI) module; and/or any other software,” 322-325]) is further configured to:
analyze, based on the detection result, a situation of the at least one person around the display device ([1228 “measure the degree to which a user is focusing on or participating in a task, meeting, or other situation. In various embodiments, it may be desirable to ascertain an engagement level of a group of users, such as an audience of a lecture, participants in a meeting, players in a game, or some other group of users,” 1229 “engagement may be measured in terms of inputs provided to a peripheral device. These may include button or key presses, motions, motions of the head, motions of a mouse, spoken words, eye contact (e.g., as determined using a camera), or any other inputs. Engagement may also be ascertained in terms of sensor readings, such as heart rate or skin conductivity,” 1318 “proliferation of external sensors allow for the data collected to be included as part of a user's in-game experience and reflect an indication of what is taking place in the real world,” 1319-1324]); and
perform the process to display the notification of the plurality of health points on the display device, at a time the situation satisfies a specific condition ([733-736, 737 “mobile phone or wearable device (watch) is used for collection of biometric feedback during the meeting to the central controller and for meeting owner awareness. Real-time information to include; heart rate, breathing rate, and blood pressure. Analysis of data from all attendees alerts the meeting owner for appropriate action. This includes: tension (resulting from higher heart and breathing rates), boredom from lowering heart rates during the meeting and overall engagement with a combination of increased rates within limits,” 738-740]).
Claim 11: Jorasch in view of Xin and Tran discloses the information processing apparatus according to claim 10 above and Jorasch further discloses wherein the contents of the notification include at least one of
information regarding grant of the plurality of health points at a specific time ([265 “user's game performance has declined, then this may be indicative of health problems (e.g., dehydration, fatigue, infection, heart attack, stroke, etc.). In various embodiments, game performance may be measured in terms of points scored, points scored per unit of time,”]),
a reason for the grant of the plurality of health points, or a recommended stretch ([1005 “when a meeting participant has been in a long meeting, the chair could send a signal to the room controller indicating how long it had been since that participant had stood up. If that amount of time is greater than 60 minutes, for example, the central controller could signal to the chair to output a series of three buzzes as a reminder for the participant to stand up….send a signal to the participant device with verbal or text reminders to stretch, walk, take some deep breaths, hydrate,”]).
Claim 12: Jorasch in view of Xin and Tran discloses the information processing apparatus according to claim 1 above and Jorasch further discloses wherein the CPU is further configured to:
acquire, based on the detection result, a situation of a plurality of persons in the space; and control, based on the situation, at least one output device to output at least one of a video, an audio, or lighting for a space production, wherein the at least one output device is in the space ([92 “Output device 325 may include any component or device for outputting or conveying information, such as to a user. Output device 325 may include a display screen, speaker, light, backlight, projector, LED, touch bar, haptic actuator, or any other output device. Sensor 330 may include any component or device for receiving or detecting environmental, ambient, and/or circumstantial conditions, situations, or the like. Sensor 330 may include a microphone, temperature sensor, light sensor, motion sensor, accelerometer, inertial sensor, gyroscope, contact sensor, angle sensor, or any other sensor,” 94-98, 102-107, 1018 “central controller 110 may take such actions as: Shut down room and turn off lights; Have video screens with shut down signal; Reschedule all meetings for other rooms; Notify facilities/IT personnel,”]).
Claim 13: Jorasch in view of Xin and Tran discloses the information processing apparatus according to claim 12 above and Jorasch further discloses wherein the situation includes at least one of
a number of the plurality of persons ([62 “user” may include a human being, set of human beings, group of human beings, an organization, company, legal entity,” 149, 150]),
an object in a hand of the user ([174 “set of playing cards held in a character's hand (e.g., in a poker game),” 192 “Sensors 3812a and/or 3218b may be used to sense when a hand is on the mouse,” 251 “Sequences may involve keys, scroll wheels, touch pads, mouse motions, head motions (as with a headset), hand motions (e.g., as captured by a camera),”]),
a performance of the action by the user ([251 “performing the user input sequence one or more times (e.g., on the actual peripheral), or in any other fashion,” 266, 267]),
a state of biometric information of the plurality of persons ([887 “camera 9090 may be aimed at an object in front of the user, aimed at another user, aimed at the user's face (e.g., to capture distances between eyes, ears, nose and mouth for biometric calculations), aimed at one of the user's eyes (e.g., to capture an image of the user's iris for a biometric calculation),”]),
an excitement degree ([262 “level of excitement or strategy to the game. For example, one player may be able to discern or infer when another player is tense, and may factor that knowledge into a decision as to whether to press an attack or not,” 267]), or
a gesture of the user ([107 “Ultrasonic sensors may be used for range-finding, presence/proximity sensing, object detection and avoidance, position tracking, gesture tracking,”]).
Claims 14 and 17: Jorasch in view of Xin and Tran discloses the information processing apparatus according to claims 12 and 15 above and Jorasch further discloses, wherein
the space includes a display device ([92 “Output device 325 may include any component or device for outputting or conveying information, such as to a user. Output device 325 may include a display screen, speaker, light, backlight, projector, LED, touch bar, haptic actuator, or any other output device. Sensor 330 may include any component or device for receiving or detecting environmental, ambient, and/or circumstantial conditions, situations,” 96, 108]),
the display device provides a function to promote a good life in the second operation mode ([267 “improvements in a player's performance may be used to infer positive changes in health status (e.g., that the user is better rested; e.g., that the user has overcome an illness; etc.). In various embodiments, the central controller 110 may combine data on vital signs with data on player performance in order to infer health status. For example, an increased body temperature coupled with a decline in performance may serve as a signal of illness in the player,” 1006 “Stress alleviation suggestions could include: Meditation; Exercise (e.g., light yoga, stretching); Healthy snacks; Naps; Fresh air; Focus on a hobby or something of personal interest; Calming videos or photos; Positive/encouraging messages from company leadership; or any other suggestions. The central controller reviews the meetings of the knowledge worker and compares them to other knowledge workers in similar roles to see if any are getting oversubscribed. For example, if certain key subject matter experts are being asked to attend significantly more innovation meetings than other subject matter experts, the central controller can alert the management team of possible overuse. In addition, the overused subject matter expert could be alerted by the central controller to consider delegating or rebalancing work in order to maintain a healthy lifestyle,” 1007, 1008 “central controller can look to see if exercise routines are typically scheduled on an individual's calendar. If so, and suddenly they begin to not appear, the central controller can provide reminders to the individual to reconsider adding exercise routines to their calendar to maintain a healthy lifestyle,”]),
the CPU is further configured to start, based on the detection result, an output control for a space production ([43 “circuit include: a touch panel or buttons for allowing the user to input information such as operational instructions or data, therethrough, or a circuit for driving the touch panel or buttons; and a display, speaker or headphone terminal for outputting, to the user, information such as display data or audio data, therethrough, or a circuit for driving the terminal,” 65, 97, 100 “ communicating with an activity amount meter (which may be a wearable terminal) which has measured the exercise amount of the locomotion training, via the input-output I/F 102 in a wireless or wired manner. Alternatively, the terminal device 100 may be configured to accept an input of the exercise amount of the locomotion training from the user via the user I/F,” 104, 105, 125]), and
Jorasch does not explicitly disclose; however, Tran discloses:
the display device transitions from a first operation mode to a second operation mode ([155, 156 “predictive model, including time series models such as those employing autoregression analysis and other standard time series methods, dynamic Bayesian networks and Continuous Time Bayesian Networks, or temporal Bayesian-network representation and reasoning methodology, is built, and then the model, in conjunction with a specific query makes target inferences,” 160-162, 163 “system allows patients to conduct a low-cost, comprehensive, real-time monitoring of their vital parameters such as ambulation and falls. Information can be viewed using an Internet-based website, a personal computer, or simply by viewing a display on the monitor. Data measured several times each day provide a relatively comprehensive data set compared to that measured during medical appointments separated by several weeks or even months. This allows both the patient and medical professional to observe trends in the data, such as a gradual increase or decrease in blood pressure, which may indicate a medical condition,”]),
the display device displays content in the first operation mode ([155, 156 “predictive model, including time series models such as those employing autoregression analysis and other standard time series methods, dynamic Bayesian networks and Continuous Time Bayesian Networks, or temporal Bayesian-network representation and reasoning methodology, is built, and then the model, in conjunction with a specific query makes target inferences,” 160-162, 163 “system allows patients to conduct a low-cost, comprehensive, real-time monitoring of their vital parameters such as ambulation and falls. Information can be viewed using an Internet-based website, a personal computer, or simply by viewing a display on the monitor. Data measured several times each day provide a relatively comprehensive data set compared to that measured during medical appointments separated by several weeks or even months. This allows both the patient and medical professional to observe trends in the data, such as a gradual increase or decrease in blood pressure, which may indicate a medical condition,”]);
the start of the output control for the space production is after the transition of the display device from the first operation mode to the second operation mode ([155, 156 “predictive model, including time series models such as those employing autoregression analysis and other standard time series methods, dynamic Bayesian networks and Continuous Time Bayesian Networks, or temporal Bayesian-network representation and reasoning methodology, is built, and then the model, in conjunction with a specific query makes target inferences,” 160-162, 163 “system allows patients to conduct a low-cost, comprehensive, real-time monitoring of their vital parameters such as ambulation and falls. Information can be viewed using an Internet-based website, a personal computer, or simply by viewing a display on the monitor. Data measured several times each day provide a relatively comprehensive data set compared to that measured during medical appointments separated by several weeks or even months. This allows both the patient and medical professional to observe trends in the data, such as a gradual increase or decrease in blood pressure, which may indicate a medical condition,”]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jorasch such that the display device transitions from a first operation mode to a second operation mode, the display device displays content in the first operation mode, and the start of the output control for the space production is after the transition of the display device from the first operation mode to the second operation mode, as per the steps of Tran, in order to precisely detect and process patient movements, specifically determine movements of a wide variety of body elements, and thereby improve the determination and specificity of user-related actions.
Claim 16: Jorasch in view of Xin and Tran discloses the information processing apparatus according to claim 15 above and Jorasch further discloses, wherein the CPU is further configured to grant the plurality of health points to the user after an end of the generated exercise program ([265 “game performance may be measured in terms of points scored, points scored per unit of time, opponents neutralized, levels achieved, objectives achieved, time lasted, skill level of opponents beaten, or in terms of any other factor,” 282, 519 “provide supplemental game data (e.g., number of lives left, distance to a goal, number of points earned) which can act as a second screen of information in addition to a main screen display 6360 on which a game is being played,” 532, 533, 646, 1143 “biometric data is used to establish features and/or combinations of features that can be uniquely linked or tied to an individual,” 1455 “central controller 110 may monitor a medical device associated with user 1. Exemplary medical devices may include an electrocardiogram (EKG), heart monitor, glucose monitors, scales, skin patches, ultrasounds, etc. In various embodiments, the central controller 110 may monitor data from a health or exercise monitoring device (e.g., from a Fitbit, treadmill, etc.),”]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Jorasch et al. (20220006813) in view of Xin et al. (CN112861624) and Tran (20160287166), and further in view of Nishimura (20190381396).
Claim 8: Jorasch in view of Xin and Tran discloses the information processing apparatus according to claim 7 above, and Jorasch does not explicitly disclose; however, Nishimura discloses, wherein the situation includes a degree of concentration to view content on the display device ([136 “control unit 260 determines a mini-game to improve a concentration level as an improving means and new equipment applied to the character C1 as a reward,” 137-139, 140 “degree of influence, on a sympathetic activity level, of an action of playing a mini-game to improve a degree of concentration is illustrated for each of a user A and a user B,” Figure 7]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jorasch such that the situation includes a degree of concentration to view content on the display device, as per the steps of Nishimura, in order to specifically determine the concentration levels of viewers and use that information to provide a more optimized determination and presentation of information to users and participants.
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Jorasch et al. (20220006813) in view of Xin et al. (CN112861624) and Tran (20160287166), and further in view of Ochiai et al. (20210338170).
Claim 9: Jorasch in view of Xin and Tran discloses the information processing apparatus according to claim 2 above and Jorasch further discloses wherein the CPU is further configured to:
determine a total value of the plurality of health points in a specific period ([860 “user is playing a game and it is determined by AI accelerator 8060 that the user is performing poorly a signal can be sent back to user device 106a to adjust the difficulty to a more appropriate level,” 935, 1130 “usage pattern may correlate to a skill level in a game, and the central controller may utilize the inferred skill level to adjust the difficulty of a game,” 1295, 1311 “game could introduce more challenging opponents or adjust the player skill and make it more difficult to score goals. Likewise, if the players heart rate is elevated for an extended period of time, the game difficulty could be adjusted to allow for recovery of the heart and a slowing of the heat rate,” 1821 “maximum of 6 points (for example) may be allocated, with 1 point deducted from the maximum for each 10% deviation of the identified gesture from a reference gesture. In various embodiments, point allocation instructions specify that a predetermined number of points will be allocated if the identified gesture matches a reference gesture,”]);
Jorasch does not explicitly disclose; however, Ochiai discloses:
determine a temporal change in the total value of the plurality of health points ([114 “with regard to the walking, when the number of steps is increased by about 9800 steps/day, it is expected that the MMSE score is increased by 1 after one month. With regard to the exercise A, when the exercise time period is increased by about 612.5 minutes/day, it is expected that the MMSE score is increased by 1 after one month. With regard to the brain training A, when the training time period is increased by about 857.5 minutes/day, it is expected that the MMSE score is increased by 1 after one month. With regard to the alcohol intake, when the amount of alcohol intake is reduced by about 122.5 mL/day, it is expected that the MMSE score is increased by 1 after one month,” 115 “control part 101 determines, as a relevant preventive interventional action, a preventive interventional action whose correlation degree is a given value or more, among the one or more preventive interventional actions,” 116 “ influence degree, the coefficient derived by the multi-regression analysis may be displayed. Alternatively, a value obtained by multiplying the reciprocal of the coefficient by 1.225 may be displayed. In this case, an intervention amount expected to cause the MMSE score to be increased by 1 point is displayed, so that it becomes possible to provide an index which is easy for the user to understand,”]); and
calculate an interest degree of the user in an exercise ([46 “biological information is information which is deemed or likely to be relevant to a health degree in a given health domain of concern of the user, and measurement thereof makes it possible to objectively determine the health degree. In order to check a time-series transition indicating how the health degree changes over time, the acquisition of the biological information is made by performing time-series monitoring. The time-series monitoring means performing the acquisition of the biological information periodically with a certain frequency; e.g., one per day, once per week, or once per month,” 83 “assuming that the aerobic exercise is determined as the preventive interventional action, and the amount of the aerobic exercise is determined as the intervention amount, it can be considered that, as the intervention amount becomes larger or becomes closer to a proper amount, the health degree is further improved. In this case, the terminal device 100 can acquire the exercise amount of the aerobic exercise as the intervention amount, by communicating with an activity amount meter (which may be a wearable terminal) which has measured the exercise amount of the aerobic exercise,”]) based on at least one of the total value of the health points in the specific period or the temporal change of the total value of the health points ([114 “with regard to the walking, when the number of steps is increased by about 9800 steps/day, it is expected that the MMSE score is increased by 1 after one month. With regard to the exercise A, when the exercise time period is increased by about 612.5 minutes/day, it is expected that the MMSE score is increased by 1 after one month. With regard to the brain training A, when the training time period is increased by about 857.5 minutes/day, it is expected that the MMSE score is increased by 1 after one month. 
With regard to the alcohol intake, when the amount of alcohol intake is reduced by about 122.5 mL/day, it is expected that the MMSE score is increased by 1 after one month,” 115 “control part 101 determines, as a relevant preventive interventional action, a preventive interventional action whose correlation degree is a given value or more, among the one or more preventive interventional actions,” 116 “ influence degree, the coefficient derived by the multi-regression analysis may be displayed. Alternatively, a value obtained by multiplying the reciprocal of the coefficient by 1.225 may be displayed. In this case, an intervention amount expected to cause the MMSE score to be increased by 1 point is displayed, so that it becomes possible to provide an index which is easy for the user to understand,”]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jorasch to determine a temporal change in the total value of the plurality of health points and to calculate an interest degree of the user in an exercise based on at least one of the total value of the health points in the specific period or the temporal change of the total value of the health points, as per the steps of Ochiai, in order to specifically determine the concentration levels of viewers and use that information to provide a more optimized determination and presentation of information to users and participants.
Claim 10: Jorasch in view of Xin and Tran discloses the information processing apparatus according to claim 9 above, and Jorasch does not explicitly disclose; however, Ochiai discloses, wherein the CPU ([41, 123]) is further configured to determine contents of the notification based on the interest degree of the user in the exercise ([65 “control part 101 accepts an input of the intervention amount and acquires the intervention amount through communication with the device via the input-output I/F 102. In a case where the intervention amount is that of a preventive interventional action provided to the user by the terminal device 100, so as to apply an intellectual stimulus such as questionnaire, quiz, game and brain training, the control part 101 stores the amount (time period) of the preventive interventional action provided,” 83 “assuming that the aerobic exercise is determined as the preventive interventional action, and the amount of the aerobic exercise is determined as the intervention amount, it can be considered that, as the intervention amount becomes larger or becomes closer to a proper amount, the health degree is further improved. 
In this case, the terminal device 100 can acquire the exercise amount of the aerobic exercise as the intervention amount, by communicating with an activity amount meter (which may be a wearable terminal) which has measured the exercise amount of the aerobic exercise, via the input-output I/F 102 in a wireless or wired manner,” 96 “making an encouragement call (locomocol call) one to several times per week could raise an intervention method continuation rate and significantly improve the duration of single-leg standing in an eye-opened state,” 100 “assuming that the locomotion training is determined as the preventive interventional action, and the amount of the locomotion training is determined as the intervention amount, it can be considered that, as the intervention amount becomes larger, or becomes closer to a proper amount, the health degree is further improved,” 126 “relevant preventive interventional action and initial values of the influence degree thereof, in the embodiment subject to an individual. In this case, even in a state in which a relevant preventive interventional action optimal to an individual is not sufficiently determined, e.g., in a state immediately after the start of monitoring, it is possible to present, to the user, a generalized relevant preventive interventional action, etc., expected to be accurate on some level. Then, while the user continues to monitor the biological information and the intervention amount, the relevant preventive interventional action and the influence degree thereof will be changed to those corresponding to the user,”]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jorasch to determine contents of the notification based on the interest degree of the user in the exercise, as per the steps of Ochiai, in order to specifically determine the concentration levels of viewers and use that information to provide a more optimized determination and presentation of information to users and participants.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Jorasch et al. (20220006813) in view of Xin et al. (CN112861624) and Tran (20160287166), and further in view of Kaleal et al. (20170027528).
Claim 15: Jorasch in view of Xin and Tran discloses the information processing apparatus according to claim 1 above, and Jorasch does not explicitly disclose; however, Kaleal discloses, wherein the CPU is further configured to:
determine, based on the detection result, an exercise that the user performs ([33 “individual may provide preferences that note what type of exercises the user likes and doesn't like to perform or what type of coaching motivates the user (e.g., soft encouraging technique over a more pushy demeaning approach),” 43 “information captured by an intelligent fitness device 119 employed by the user in association with performance of a fitness routine or exercise. For example, some fitness exercises can involve usage of fitness equipment, such as exercise machines (e.g., a treadmill, an bicycle, a rowing machine, a weight machine, a balance board, etc.) or accessories (e.g., free weights, weighted balls, hula hoops, yoga blocks, bands, etc.),” 44, 45]);
generate an exercise program based on each of the determined exercise and specific information of the user ([33, 34 “responses determined for manifestation by an avatar (based on received user physical and physiological activity data) to provide a user with guidance, instruction or support with respect to performing an action, program, task or routine can be based on learned user behavior. For example, historical data regarding past reactions/responses performed by the user to avatar responses in association with same or similar routines, tasks or actions can be collected and analyzed using various machine learning techniques to determine what types of avatar responses work and don't work for the user,” 42 “physiological/biometric data, sensor devices 104 can facilitate capture and reporting of user movement or motion corresponding to speed, direction, and orientation of the user a whole and/or individual body parts of the user. For example, the one or more sensor devices 104 can include motion sensors such as an accelerometer, a gyroscope or an inertial measurement unit (IMU). Thus captured motion data can include information identifying acceleration, rotation/orientation, and/or velocity of the motion sensor device 104 itself, facilitating determination of motion and movement data of the body and/or body parts to which the motion sensor are attached,” 46, 59 “include profile information for the user that defines various known characteristics of the user, including but not limited to, health information, preferences, demographics, user schedule, and historical information gathered about the user over the course of a monitored program, routine or activity. 
This input can also include contextual information associated performance of the program, routine or activity, such as a location of the user, information about the location (e.g., a map of the location, physical structures at the location, events occurring at the location, etc.),” 61 “analysis of received input 234 with respect to a defined program, routine or task the user is performing, reactions are determined for manifestation by an avatar presented to the user. These reactions can include visual and/or audible (e.g., speech responses) responses that provide instruction, guidance, motivation, and evaluation for the user with respect to the user's performance (or non-performance) of the program, routine or task,”]); and
perform a process to present the generated exercise program on a display device, wherein the display device is in the space ([47 “rendering component 108 can be configured to generate a graphical user interface that includes the avatar and rendered via a display screen of the client device 106. In another example, rendering component 108 can be configured to generate an avatar as a hologram that is presented to the user,” 131 “avatar generated and presented to a user in association with performance of a fitness routine or activity can perform the movements of the fitness routine or activity. While designing a fitness routine or activity and/or prior to performing a fitness routine or activity, the user may desire to see a demonstration of one or more of the physical moves required by the activity. Preview component 408 is configured to generate an avatar that demonstrates one or more moves selected for inclusion in a fitness routine or activity prior to beginning performance of the routine or activity,” 132 “designing a custom fitness routine or activity, the user can view an avatar demonstrating a chosen move or combination of moves based on the requirements selected for the move or combination of the moves (e.g., speed, intensity, range of motion, etc.). For example, the user can select a series of yoga poses to include in a yoga routine and the select preview component 408 to view the series of yoga poses being performed by an avatar,” 133]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jorasch to implement a process of determining an exercise that the user intends to perform on a basis of the detection result, a process of individually generating an exercise program of the determined exercise according to information of the user, and a process of presenting the generated exercise program on a display device installed in the space, as per the steps of Kaleal, in order to specifically determine the concentration levels of viewers and use that information to provide a more optimized determination and presentation of information to users and participants.
Response to Arguments
Applicant's arguments and amendments with respect to the rejection of claims 1-19, see Remarks/Amendments submitted 21 November 2025, have been carefully considered and are addressed below.
Claim Interpretation
Applicant's amendments to the claims, which more precisely detail the technical functions of the invention in technical terms, result in the withdrawal of the claim interpretation previously in place under 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph).
Claim Rejections - 35 USC § 101
Applicant's amendments to the independent and dependent claims, which more precisely specify the processing of collected data, including the detection and determination of skeleton arrangements comprising the coordinates of each part of a user, lead to the conclusion that the instant invention overcomes the rejection of all pending claims under the requirements of the 2019 PEG Revised Step 2A, Prongs One and Two, and MPEP 2106; the rejection is therefore withdrawn.
Upon consideration of the instant invention under the requirements of 2019 PEG Revised Step 2A, Prong One, and MPEP 2106, the determination that the instant invention is directed to a judicial exception is maintained. The judicial exception is similar to abstract ideas related to certain methods of managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions, and is also similar to abstract ideas related to mental processes, including concepts performed in the human mind such as observation, evaluation, judgment, and opinion.
Upon consideration of the instant invention under the requirements of 2019 PEG Revised Step 2A, Prong Two, and MPEP 2106, Examiner has determined that the instant invention is directed to a practical application and an improvement to system functioning; therefore, the rejection of all pending claims under the statute is withdrawn. Applicant's amendments to the claims, specifically detailing the detection of skeleton-related information including coordinate positions of the skeletal/body-related elemental parts of patients and the systemic coordination of provided benefits, are considered a technical improvement to system functioning. Examiner's conclusion is guided by the disclosures of the written description, such as paragraph [95], which details that in “the detection of the skeleton information, for example, each part (head, shoulder, hand, foot, and the like) of each person is recognized from the captured image, and the coordinate position of each part is calculated (acquisition of joint position). Furthermore, the detection of the skeleton information may be performed as posture estimation processing,” and the calculation of health points by the comparison of collected skeletal data, as detailed at paragraph [96], which recites that the “calculation unit 232 determines whether or not the user has performed a pre-registered ‘healthful behavior’ on the basis of the detected skeleton information of the user, and calculates a corresponding health point in a case where the user has performed the ‘healthful behavior.’” Examiner interprets these disclosures as an improvement to computational processing, with the results of the collected data being the determination of healthy behavior as well as the provision of recommendations for healthy behavior. Further disclosures related to the processing are detailed at least at paragraphs [97]-[102], [123]-[133], and [135]-[138], which detail the merging of skeleton information to determine position information.
Thus, while the instant invention is directed to a judicial exception, it is further directed to a practical application, and therefore the rejection is withdrawn.
Claim Rejections - 35 USC § 103
Applicant's arguments and amendments, see Remarks/Amendments filed 21 November 2025, with respect to the rejection(s) of claim(s) 1-19 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of the combination of previously cited references Jorasch, Nishimura, Ochiai, and Kaleal and newly identified references Xin et al. (CN112861624) and Tran (20160287166), which detail the implementation of skeletal-related displays and data interpretation and the determination of a wide variety of health points as related to user actions.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See attached References Cited form PTO-892.
See Yoshida (20240112364) for disclosures related to the detection and processing of skeletal structures of persons implemented with two dimensional and three dimensional related data processing. See at least paras.[29]-[60].
See Ishihara (20240029379) for disclosures related to the implementation of the collection of skeleton related information and the processing of the collected data and the specification of dimensional descriptions of joints. See at least paras. [41]-[73].
See Sternitzke et al. (20220331028) for disclosures related to the capturing of movement sequences of persons and translating the captured information into movement sequences for the purpose of developing skeletal models. See at least paras. [135]-[186].
See Grob et al. (20220108561) for disclosures related to the capturing of movement patterns of individuals and the generation of skeletal models of the captured information representing movements of the bodily elements of the individuals. See at least paras. [52]-[90].
See Sakaue (20150294481) for disclosures related to the acquisition of motions of persons and the processing of the data and determination of rehabilitation processes. See at least paras. [50]-[84].
See Naka et al. (200100097452) for disclosures related to the collection of skeletal motions by means of body located transmitters and the processing of the data to determine motion details and skeletal information. See at least paras. [34]-[72].
See Chunlong et al. (CN 110941990 B) for disclosures related to collecting action pictures of a target subject during human body movement, extracting skeleton key point coordinates of the motion of the target subject from the action pictures, and inputting the skeleton key point coordinates into a pre-trained assessment model for assessing the motion of the target subject.
See Elias et al., Understanding the Gap between 2D and 3D Skeleton-Based Action Recognition, 2019 IEEE International Symposium on Multimedia (ISM), for disclosures related to the recognition of actions based upon the adoption of a state-of-the-art bidirectional LSTM network to analyze the accuracy gap in the expressive power of 2D and 3D skeleton data recorded simultaneously on a large set of 20k human actions.
See Zhu et al., Co-occurrence Feature Learning for Skeleton based Action Recognition using Regularized Deep LSTM Networks, arXiv:1603.07772v1 [cs.CV], 24 Mar 2016, for disclosures related to an end-to-end fully connected deep LSTM network for skeleton-based action recognition.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David Stoltenberg whose telephone number is (571) 270-3472.
The examiner can normally be reached Monday-Friday, 8:30 AM to 5:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kambiz Abdi, can be reached at (571) 272-6702. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300, and the examiner's direct fax phone number is (571) 270-4472.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center at (866) 217-9197 (toll free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/DAVID J STOLTENBERG/Primary Examiner, Art Unit 3685