DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
1 This action is in response to the amendment filed on 1/02/2026. Claims 1, 9, and 16-20 have been amended. Claims 9-20 overcome the rejection under 35 U.S.C. § 101, but claims 1-20 remain rejected under 35 U.S.C. § 103 as set forth below.
Response to Arguments
2 Regarding applicant’s arguments filed on 1/02/2026 for claims 9-20 with respect to the rejections under 35 U.S.C. § 101: claims 16-20 have been amended to overcome the rejection. Claims 9-15 were not amended; instead, applicant argues that the claims are “directed to a system that includes a processor, and as such cannot be directed to a carrier wave even if they also recite a computer-readable storage medium”. This argument has been considered and is persuasive. Accordingly, claims 9-20 overcome the rejection under 35 U.S.C. § 101.
3 Applicant’s arguments filed on 1/02/2026 with respect to claims 1, 9, and 16 and the rejection under 35 U.S.C. § 102, asserting that the prior art does not teach “re-configuring the array of cameras with the updated camera properties” in combination with the amended limitation “the array of cameras comprises physical cameras that capture images of the subject”, have been considered but are moot in view of the new grounds of rejection under 35 U.S.C. § 103.
4 Dependent claims 2-8, 10-15, and 17-20 depend directly or indirectly from independent claims 1, 9, and 16, respectively. Applicant presents no arguments directed to these claims beyond those made for the independent claims. The limitations of the dependent claims, in combination, were largely addressed in the previous rejections as explained above, with the rejections of a few claims adjusted to reflect the amendments to the independent claims.
Claim Rejections - 35 USC § 103
5 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
6 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7 Claims 1-11, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ahmedt-Aristizabal, D., Nguyen, C., Tychsen-Smith, L., Stacey, A., Li, S., Pathikulangara, J., ... & Wang, D. (2023). Monitoring of pigmented skin lesions using 3D whole body imaging. Computer Methods and Programs in Biomedicine, 232, 107451 (hereinafter Ahmedt-Aristizabal) in view of Ramachandran et al. (US 20220148723 A1).
8 Regarding claim 1, Ahmedt-Aristizabal teaches a method comprising:
receiving an attribute of a subject of an array of cameras, wherein the array of cameras comprises physical cameras that capture images of the subject ([Page 9; Section 2.2; Table 3] reciting “Data A (3 subjects, 2 poses each), Data B (10 subjects, 1 pose each, [2 sources])”; [Page 9; Section 2.2.1] reciting “Participants undergo whole body photography excluding skin on lower body parts (lower limps), feet or scalp, by standing on the motionless wooden platform with a natural standing stance and posture for screening. Each subject was scanned in 2 poses, respectively with arms pointing downwards at an angle (Apose) and “arms downward” without an angle…”; [See also Fig. 1 and Fig. 3]);
[Ahmedt-Aristizabal, Fig. 1 and Fig. 3 reproduced in greyscale]
receiving a plurality of camera properties of the array of cameras ([Page 14; Section 2.3.1] reciting “There are two conflicting parameters to compromise: i) a wider camera field of view provides a more complete 3D model and more image overlapping between nearby views provides better estimations of camera poses; ii) image resolution for accurate depth estimation for 3D reconstruction. A camera pose represents the 3D position and orientation of the camera when capturing an image. Incorrect camera poses as well as less view overlapping lead to noisy depth and a low-quality 3D reconstructed model. In practice, we need to make sure all cameras are detected and used for 3D reconstruction, and minimise the missing part of the output 3D model, particularly the head, shoulder, hands and feet.”);
generating a plurality of updated camera properties based on the plurality of camera properties and the attribute ([Page 24; Section 2.6] reciting “Selecting the active detection (either by selecting within the bounding box region of the lesion directly from the central primary image or the “Current Image Detections box”) updates the selection across all views (highlighting on the 3D model, primary image, enlarged image crop and selected current image detection and metadata). When the actively selected detection is changed, the virtual camera viewing the 3D model can optionally be moved to a fixed offset from the 3D point on the mesh surface of the model to point directly at the computed 3D points of the detection (focusing the 3D view on the actively selected detection).”);
and generating a 3D model of the subject from images captured by the array of re-configured cameras ([Page 17; Section 2.4] reciting “Given the reconstructed 3D model and camera intrinsic and extrinsic parameters, a preprocessor renders the 3D models to get the corresponding depth image to each captured image, and a mask indicating which pixels are associated with the subject for each camera view (see Fig. 12). 2D to 3D projection can then be obtained from the depth image and camera parameters.”).
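For illustration only (not part of the claim mapping), the 2D-to-3D projection described in the cited portion of Ahmedt-Aristizabal may be sketched as follows, assuming a pinhole camera with intrinsics K and extrinsics [R | t]; the routine and names are hypothetical and do not appear in the reference:

```python
import numpy as np

def backproject_pixel(u, v, depth, K, R, t):
    """Back-project pixel (u, v) with known depth into 3D world
    coordinates, given pinhole intrinsics K and extrinsics [R | t]
    that map world points into camera coordinates.

    Illustrative sketch of the 2D-to-3D projection described by
    Ahmedt-Aristizabal (Section 2.4); this exact routine is not
    disclosed in the reference.
    """
    pixel_h = np.array([u, v, 1.0])               # homogeneous pixel
    p_cam = depth * (np.linalg.inv(K) @ pixel_h)  # point in camera frame
    return R.T @ (p_cam - t)                      # camera -> world

# Hypothetical parameters: a pixel at the principal point with depth
# 1.5 m back-projects to a point 1.5 m along the optical axis.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
print(backproject_pixel(640, 480, 1.5, K, np.eye(3), np.zeros(3)))
```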
9 Although Ahmedt-Aristizabal could teach re-configuring the array of cameras with the updated camera properties ([Page 24; Section 2.6] reciting “Selecting the active detection (either by selecting within the bounding box region of the lesion directly from the central primary image or the “Current Image Detections box”) updates the selection across all views (highlighting on the 3D model, primary image, enlarged image crop and selected current image detection and metadata). When the actively selected detection is changed, the virtual camera viewing the 3D model can optionally be moved to a fixed offset from the 3D point on the mesh surface of the model to point directly at the computed 3D points of the detection (focusing the 3D view on the actively selected detection).”), Ramachandran is relied upon to teach this limitation more explicitly.
10 Ramachandran teaches re-configuring the array of cameras with the updated camera properties ([0048] reciting “With reference to FIG. 3, there is shown a scenario 300. In the scenario 300, there is shown a camera rig that may include one or more imaging sensors, such as a first imaging sensor 302A, a second imaging sensor 302B, and a third imaging sensor 302C.”; [0049] reciting “In accordance with an embodiment, the circuitry 202 may be configured to control the camera rig (that includes the one or more imaging sensors) to scan at least the first anatomical portion 118 of the body of the human subject 116. The camera rig may execute a 3D scan of at least the first anatomical portion of the human subject 116 to capture the physical features in detail.”);
[Ramachandran, Fig. 3 reproduced in greyscale]
11 It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Ahmedt-Aristizabal to incorporate the teachings of Ramachandran, providing a clearer method of re-configuring the physical cameras taught by Ahmedt-Aristizabal based on the various camera properties. Doing so would allow the configuration to scan the human subject to capture physical features of the human subject in detail, such as skin color, skin texture, and hair on a body of the human subject, as stated by Ramachandran ([0049]).
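As a further illustration of the combined teaching relied upon above, re-configuring an array of physical cameras with updated camera properties might be sketched as follows; the property set and camera interface are assumptions for illustration only and do not correspond to any API disclosed by Ahmedt-Aristizabal or Ramachandran:

```python
from dataclasses import dataclass
from typing import Iterable, Protocol

@dataclass
class CameraProperties:
    # Hypothetical property set drawn from the properties discussed
    # in the cited art: pose, field of view, and brightness.
    pan_deg: float
    tilt_deg: float
    field_of_view_deg: float
    brightness: float

class Camera(Protocol):
    # Assumed device interface; not drawn from either reference.
    def apply(self, props: CameraProperties) -> None: ...

def reconfigure_array(cameras: Iterable[Camera],
                      updated: Iterable[CameraProperties]) -> None:
    """Apply one updated property set to each physical camera
    in the array."""
    for camera, props in zip(cameras, updated):
        camera.apply(props)
```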
12 Regarding claim 2, Ahmedt-Aristizabal in view of Ramachandran teaches the method of claim 1 (see claim 1 rejection above), wherein the subject comprises a medical patient, and wherein the attribute of the subject comprises a medical diagnosis of the medical patient (Ahmedt-Aristizabal; [Page 6; Section 2.1] “During the image acquisition, the patient stands on a motionless wooden platform at the centre of the system, which is used as a reference dimension. This stand is also used to crop and scale the mesh of a patient to the right physical height. The modular acquisition system is shown in Fig. 3.”; [Page 5; Section 2] reciting “In this paper, we introduce a system called 3DSkin-mapper which performs detection, monitoring and analysis of skin lesions on the patient’s entire body.”).
13 Regarding claim 3, Ahmedt-Aristizabal in view of Ramachandran teaches the method of claim 2 (see claims 1 and 2 rejections above), wherein the medical diagnosis is inferred from an initial set of images obtained from the array of cameras (Ahmedt-Aristizabal; [Page 9-10; Section 2.2.1] reciting “Using the proposed imaging system described in Subsection 2.1, we collected images from three (3) healthy participants to test the 3DSkin-mapper workflow - 3D human reconstruction, 3D mapping of skin lesions and longitudinal tracking. These participants were recruited with the purpose of lesion screening instead of a clinical study…The lesions are initially marked on all 2D images of a single subject to aid in the evaluation and monitoring of the lesion detector...”).
14 Regarding claim 4, Ahmedt-Aristizabal in view of Ramachandran teaches the method of claim 1 (see claim 1 rejection above), wherein the updated camera properties comprise updated camera poses for the array of cameras (Ahmedt-Aristizabal; [Page 10; Section 2.2.2] reciting “3D body scan images with high-quality skin texture information are used to optimise camera poses, and test the processing pipeline.”).
15 Regarding claim 5, Ahmedt-Aristizabal in view of Ramachandran teaches the method of claim 1 (see claim 1 rejection above), wherein the camera properties comprise camera poses for the array of cameras (Ahmedt-Aristizabal; [Page 6; Fig. 2 Description] reciting “Reconstruction of the human body from different camera poses and sparse point cloud estimation.”; [Page 10; Section 2.2.2] reciting “3D body scan images with high-quality skin texture information are used to optimise camera poses, and test the processing pipeline.”).
16 Regarding claim 6, Ahmedt-Aristizabal in view of Ramachandran teaches the method of claim 1 (see claim 1 rejection above), wherein the camera properties comprise white balance or brightness (Ahmedt-Aristizabal; [Page 20; Section 2.5.1] reciting “The training dataset is enhanced by using various data augmentation techniques such as Mosaic, randomly rotating images between 0 and 15 degrees, randomly modifying the brightness, contrast, saturation, and hue of each image, and randomly flipping the image horizontally”).
17 Regarding claim 7, Ahmedt-Aristizabal in view of Ramachandran teaches the method of claim 1 (see claim 1 rejection above), wherein the camera properties comprise resolution or field of view (Ahmedt-Aristizabal; [Page 13; Section 2.3.1] reciting “In order to expedite the process, we utilised computer graphics rendering to generate synthetic images with a similar configuration and resolution as our camera rig…”).
18 Regarding claim 8, Ahmedt-Aristizabal in view of Ramachandran teaches the method of claim 1 (see claim 1 rejection above), wherein the updated camera properties are inferred from a machine learning model, wherein the camera properties are inputs to the machine learning model, and wherein the updated camera properties are generated by the machine learning model (Ramachandran; [0050] reciting “In an embodiment, the circuitry 202 (or the camera rig) may register viewpoint-specific scan data from all such viewpoints into 3D scan data.”; [0069] reciting “At 414, an input feature may be generated for the machine learning model 204A. In accordance with an embodiment, the circuitry 202 may be configured to generate the input feature for the machine learning model 204A based on the received set of bio-signals, the medical condition information associated with the human subject 116, and the anthropometric features related to the body of the human subject 116.”).
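For illustration of the limitation of claim 8, in which camera properties are inputs to a machine learning model and updated camera properties are generated by that model, a minimal sketch follows, assuming a generic multi-output regressor and placeholder data; nothing in this sketch is taken from the cited references:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data: each row encodes a subject attribute plus current
# camera properties (e.g., pose angle, brightness, field of view);
# each target row holds the corresponding updated camera properties.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = X + 0.05 * rng.standard_normal((200, 4))

# Train a small multi-output regressor on the placeholder mapping.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

# Inference: current camera properties in, updated properties out.
updated_properties = model.predict(X[:1])
```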
19 Regarding claim 9, Ahmedt-Aristizabal teaches a system comprising:
receive an attribute of a subject of an array of cameras ([Page 9; Section 2.2; Table 3] reciting “Data A (3 subjects, 2 poses each), Data B (10 subjects, 1 pose each, [2 sources])”; [Section 2.2.1] reciting “Participants undergo whole body photography excluding skin on lower body parts (lower limps), feet or scalp, by standing on the motionless wooden platform with a natural standing stance and posture for screening. Each subject was scanned in 2 poses, respectively with arms pointing downwards at an angle (Apose) and “arms downward” without an angle…”; [See also Fig 1]);
[Ahmedt-Aristizabal, Fig. 1 reproduced in greyscale]
receive a plurality of camera properties of the array of cameras ([Page 14; Section 2.3.1] reciting “There are two conflicting parameters to compromise: i) a wider camera field of view provides a more complete 3D model and more image overlapping between nearby views provides better estimations of camera poses; ii) image resolution for accurate depth estimation for 3D reconstruction. A camera pose represents the 3D position and orientation of the camera when capturing an image. Incorrect camera poses as well as less view overlapping lead to noisy depth and a low-quality 3D reconstructed model. In practice, we need to make sure all cameras are detected and used for 3D reconstruction, and minimise the missing part of the output 3D model, particularly the head, shoulder, hands and feet.”);
receive a plurality of updated camera properties of the array of cameras ([Page 24; Section 2.6] reciting “Selecting the active detection (either by selecting within the bounding box region of the lesion directly from the central primary image or the “Current Image Detections box”) updates the selection across all views (highlighting on the 3D model, primary image, enlarged image crop and selected current image detection and metadata). When the actively selected detection is changed, the virtual camera viewing the 3D model can optionally be moved to a fixed offset from the 3D point on the mesh surface of the model to point directly at the computed 3D points of the detection (focusing the 3D view on the actively selected detection).”), wherein the plurality of updated camera properties are set in response to viewing a 3D model of the subject ([Page 18; Section 2.5] reciting “First, lesions are detected on 2D images, and then tracked in 3D using 3D reconstruction techniques. This process allows us to estimate the camera poses, depth, and mask of each input image, which are used to accurately compute the 3D position of each detected lesion on the 3D model.”), wherein the 3D model is generated from images taken by the array of cameras ([Page 17; Section 2.4] reciting “Given the reconstructed 3D model and camera intrinsic and extrinsic parameters, a preprocessor renders the 3D models to get the corresponding depth image to each captured image, and a mask indicating which pixels are associated with the subject for each camera view (see Fig. 12). 2D to 3D projection can then be obtained from the depth image and camera parameters.”);
and train a camera property machine learning model with training data comprising a mapping of the attribute of the subject ([Page 5; Section 2] reciting “In this paper, we introduce a system called 3DSkin-mapper which performs detection, monitoring and analysis of skin lesions on the patient’s entire body. The system workflow is illustrated in Fig. 2. First, a camera rig with 60 (or more) high-resolution consumer grade cameras is used to capture 2D images of the entire body of a patient simultaneously. These images are then passed through a processing pipeline for 3D reconstruction, depth post-processing, and 2D to 3D projection. A trained deep learning model localises the lesions within the 2D images, which are then mapped back to the 3D geometry of the human body.”).
20 Ahmedt-Aristizabal does not explicitly teach a processing unit; and a computer-readable storage medium having computer-executable instructions stored thereupon, which, when executed by the processing unit, cause the processing unit to: …train a camera property machine learning model with training data comprising a mapping of the attribute of the subject and the plurality of camera properties to the updated camera properties, wherein one or more of an individual array of cameras is re-configured according to an individual updated camera property generated by the camera property machine learning model from an individual attribute of an individual subject and an individual plurality of camera properties of the individual array of cameras, wherein the individual array of cameras comprises physical cameras that capture images of the individual subject.
21 Ramachandran teaches a processing unit ([0040] reciting “The circuitry 202 may include one or more specialized processing units, which may be implemented as an integrated processor or a cluster of processors that perform the functions of the one or more specialized processing units, collectively.”); and a computer-readable storage medium having computer-executable instructions stored thereupon, which, when executed by the processing unit, cause the processing unit to ([0090] reciting “Various embodiments of the disclosure may provide a non-transitory computer-readable medium and/or storage medium having stored thereon, computer-executable instructions executable by a machine and/or a computer (for example the electronic apparatus 102). The instructions may cause the machine and/or computer (for example the electronic apparatus 102) to perform operations that include control of the first head-mounted display 104 to render the first three-dimensional (3D) model 120 of at least the first anatomical portion 118 of the body of the human subject 116.”): …train a camera property machine learning model with training data comprising a mapping of the attribute of the subject and the plurality of camera properties to the updated camera properties, wherein one or more of an individual array of cameras is re-configured according to an individual updated camera property generated by the camera property machine learning model from an individual attribute of an individual subject and an individual plurality of camera properties of the individual array of cameras, wherein the individual array of cameras comprises physical cameras that capture images of the individual subject ([0048] reciting “With reference to FIG. 3, there is shown a scenario 300. In the scenario 300, there is shown a camera rig that may include one or more imaging sensors, such as a first imaging sensor 302A, a second imaging sensor 302B, and a third imaging sensor 302C.”; [0049] reciting “In accordance with an embodiment, the circuitry 202 may be configured to control the camera rig (that includes the one or more imaging sensors) to scan at least the first anatomical portion 118 of the body of the human subject 116. The camera rig may execute a 3D scan of at least the first anatomical portion of the human subject 116 to capture the physical features in detail.”).
[Ramachandran, Fig. 3 reproduced in greyscale]
22 It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Ahmedt-Aristizabal to incorporate the teachings of Ramachandran, providing a clearer method of re-configuring the physical cameras taught by Ahmedt-Aristizabal based on the various camera properties. Doing so would allow the configuration to scan the human subject to capture physical features of the human subject in detail, such as skin color, skin texture, and hair on a body of the human subject, as stated by Ramachandran ([0049]).
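For illustration of the claimed training data, which comprises a mapping of the attribute of the subject and the plurality of camera properties to the updated camera properties, one possible record layout is sketched below; the record fields and values are hypothetical and are not drawn from the cited art:

```python
from typing import NamedTuple

class TrainingRecord(NamedTuple):
    # Hypothetical layout of one training example for the claimed
    # camera property machine learning model.
    subject_attribute: str                         # e.g., a diagnosis or body part
    camera_properties: tuple[float, ...]           # current properties
    updated_camera_properties: tuple[float, ...]   # training target

training_data = [
    TrainingRecord("lesion_screening", (30.0, 0.5, 70.0), (25.0, 0.6, 65.0)),
    TrainingRecord("full_body_scan",   (45.0, 0.4, 90.0), (40.0, 0.5, 85.0)),
]

# Features pair the attribute with the current properties; labels are
# the updated properties the model is trained to generate.
features = [(r.subject_attribute, *r.camera_properties) for r in training_data]
labels = [r.updated_camera_properties for r in training_data]
```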
23 Regarding claim 10, Ahmedt-Aristizabal in view of Ramachandran teaches the system of claim 9 (see claim 9 rejection above), wherein the plurality of updated camera properties comprises updated camera poses of the array of cameras (Ahmedt-Aristizabal; [Page 10; Section 2.2.2] reciting “3D body scan images with high-quality skin texture information are used to optimise camera poses, and test the processing pipeline.”).
24 Regarding claim 11, Ahmedt-Aristizabal in view of Ramachandran teaches the system of claim 9 (see claim 9 rejection above), wherein the array of cameras comprises an array of depth cameras (Ahmedt-Aristizabal; [Page 18; Section 2.5] reciting “This process allows us to estimate the camera poses, depth, and mask of each input image, which are used to accurately compute the 3D position of each detected lesion on the 3D model. This information is then used for thorough data documentation and longitudinal monitoring.”).
25 Regarding claim 15, Ahmedt-Aristizabal in view of Ramachandran teaches the system of claim 9 (see claim 9 rejection above), wherein the attribute of the subject comprises physical characteristics about the subject (Ahmedt-Aristizabal; [Page 16; Section 2.3.3] reciting “The 3D origin is on the ground between the feet, the ground normal represents the y-axis, and the height of the body matches the physical height of the person.”).
26 Regarding claim 16, Ahmedt-Aristizabal teaches:
receive an attribute of a subject of an array of cameras, wherein the array of cameras comprises physical cameras that capture images of the subject ([Page 9; Section 2.2; Table 3] reciting “Data A (3 subjects, 2 poses each), Data B (10 subjects, 1 pose each, [2 sources])”; [Page 9; Section 2.2.1] reciting “Participants undergo whole body photography excluding skin on lower body parts (lower limps), feet or scalp, by standing on the motionless wooden platform with a natural standing stance and posture for screening. Each subject was scanned in 2 poses, respectively with arms pointing downwards at an angle (Apose) and “arms downward” without an angle…”; [See also Fig. 1 and Fig. 3]);
[Ahmedt-Aristizabal, Fig. 1 and Fig. 3 reproduced in greyscale]
receive a plurality of camera properties of the array of cameras ([Page 14; Section 2.3.1] reciting “There are two conflicting parameters to compromise: i) a wider camera field of view provides a more complete 3D model and more image overlapping between nearby views provides better estimations of camera poses; ii) image resolution for accurate depth estimation for 3D reconstruction. A camera pose represents the 3D position and orientation of the camera when capturing an image. Incorrect camera poses as well as less view overlapping lead to noisy depth and a low-quality 3D reconstructed model. In practice, we need to make sure all cameras are detected and used for 3D reconstruction, and minimise the missing part of the output 3D model, particularly the head, shoulder, hands and feet.”);
generate a plurality of updated camera properties based on the plurality of camera properties and the attribute ([Page 24; Section 2.6] reciting “Selecting the active detection (either by selecting within the bounding box region of the lesion directly from the central primary image or the “Current Image Detections box”) updates the selection across all views (highlighting on the 3D model, primary image, enlarged image crop and selected current image detection and metadata). When the actively selected detection is changed, the virtual camera viewing the 3D model can optionally be moved to a fixed offset from the 3D point on the mesh surface of the model to point directly at the computed 3D points of the detection (focusing the 3D view on the actively selected detection).”);
and generate a 3D model of the subject from images captured by the array of re-configured cameras ([Page 17; Section 2.4] reciting “Given the reconstructed 3D model and camera intrinsic and extrinsic parameters, a preprocessor renders the 3D models to get the corresponding depth image to each captured image, and a mask indicating which pixels are associated with the subject for each camera view (see Fig. 12). 2D to 3D projection can then be obtained from the depth image and camera parameters.”).
27 Ahmedt-Aristizabal does not explicitly teach a non-transitory computer-readable storage medium having encoded thereon computer-readable instructions that when executed by a processing unit causes a system to…, and although Ahmedt-Aristizabal could teach re-configuring the array of cameras with the updated camera properties ([Page 24; Section 2.6] reciting “Selecting the active detection (either by selecting within the bounding box region of the lesion directly from the central primary image or the “Current Image Detections box”) updates the selection across all views (highlighting on the 3D model, primary image, enlarged image crop and selected current image detection and metadata). When the actively selected detection is changed, the virtual camera viewing the 3D model can optionally be moved to a fixed offset from the 3D point on the mesh surface of the model to point directly at the computed 3D points of the detection (focusing the 3D view on the actively selected detection).”), Ramachandran is relied upon to teach this limitation more explicitly.
28 Ramachandran teaches a non-transitory computer-readable storage medium having encoded thereon computer-readable instructions that when executed by a processing unit causes a system to (Ramachandran; [0090] reciting “Various embodiments of the disclosure may provide a non-transitory computer-readable medium and/or storage medium having stored thereon, computer-executable instructions executable by a machine and/or a computer (for example the electronic apparatus 102). The instructions may cause the machine and/or computer (for example the electronic apparatus 102) to perform operations that include control of the first head-mounted display 104 to render the first three-dimensional (3D) model 120 of at least the first anatomical portion 118 of the body of the human subject 116.”)…re-configure the array of cameras with the updated camera properties ([0048] reciting “With reference to FIG. 3, there is shown a scenario 300. In the scenario 300, there is shown a camera rig that may include one or more imaging sensors, such as a first imaging sensor 302A, a second imaging sensor 302B, and a third imaging sensor 302C.”; [0049] reciting “In accordance with an embodiment, the circuitry 202 may be configured to control the camera rig (that includes the one or more imaging sensors) to scan at least the first anatomical portion 118 of the body of the human subject 116. The camera rig may execute a 3D scan of at least the first anatomical portion of the human subject 116 to capture the physical features in detail.”).
[Ramachandran, Fig. 3 reproduced in greyscale]
29 It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Ahmedt-Aristizabal to incorporate the teachings of Ramachandran, providing a storage medium that can store instructions for the methods taught by Ahmedt-Aristizabal, as well as a clearer method of re-configuring the physical cameras taught by Ahmedt-Aristizabal based on the various camera properties. Doing so would allow the configuration to scan the human subject to capture physical features of the human subject in detail, such as skin color, skin texture, and hair on a body of the human subject, as stated by Ramachandran ([0049]).
30 Regarding claim 17, Ahmedt-Aristizabal in view of Ramachandran teaches the non-transitory computer-readable storage medium of claim 16 (see claim 16 rejection above), wherein the attribute of the subject comprises a body part of the subject (Ahmedt-Aristizabal; [Page 14; Section 2.3.2] reciting “As the depth of focus is finite and therefore only part of the object is in focus, the focus of the camera lens needs to be adjusted to maximise the image sharpness of the body part to be reconstructed.”).
31 Regarding claim 20, Ahmedt-Aristizabal in view of Ramachandran teaches the non-transitory computer-readable storage medium of claim 16 (see claim 16 rejection above), wherein the updated camera properties are inferred from a machine learning model, wherein the camera properties are inputs to the machine learning model, and wherein the updated camera properties are generated by the machine learning model (Ramachandran; [0050] reciting “In an embodiment, the circuitry 202 (or the camera rig) may register viewpoint-specific scan data from all such viewpoints into 3D scan data.”; [0069] reciting “At 414, an input feature may be generated for the machine learning model 204A. In accordance with an embodiment, the circuitry 202 may be configured to generate the input feature for the machine learning model 204A based on the received set of bio-signals, the medical condition information associated with the human subject 116, and the anthropometric features related to the body of the human subject 116.”).
32 Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Ahmedt-Aristizabal, D., Nguyen, C., Tychsen-Smith, L., Stacey, A., Li, S., Pathikulangara, J., ... & Wang, D. (2023). Monitoring of pigmented skin lesions using 3D whole body imaging. Computer Methods and Programs in Biomedicine, 232, 107451 (hereinafter Ahmedt-Aristizabal) in view of Ramachandran et al. (US 20220148723 A1) as applied to claims 9 and 11 above, and further in view of Latif et al. (US 11829959 B1).
33 Regarding claim 12, Ahmedt-Aristizabal in view of Ramachandran teaches the system of claim 11 (see claims 9 and 11 rejections above), wherein the user comprises a medical practitioner (Ahmedt-Aristizabal; [Page 24; Section 2.6] reciting “The UI provides an alternative but effective way to present skin information to doctors. It allows doctors to see skin lesions virtually in their 3D positions, and easily track them over time.”).
34 Ahmedt-Aristizabal in view of Ramachandran does not explicitly teach wherein the updated camera properties adjust an infra-red illumination level of a depth sensor of the array of depth cameras.
35 Latif teaches wherein the updated camera properties adjust an infra-red illumination level of a depth sensor of the array of depth cameras ([Page 14; Column 6, Lines 2-8] reciting “In examples, the video camera 114 may be an RGB camera, a three-dimensional (3D) camera, a time of flight (TOF) camera, an infrared camera, an event-based camera, and the like. In some examples, the UAV 102 may have more than one camera or an array of cameras for generating stereoscopic images and/or enhanced video stream quality.”; [Page 14; Column 6; Lines 15-21] reciting “In some examples, the UAV 102 may comprise additional peripheral components such as mirrors positioned at defined angles with respect to the video camera 114 to aid in improved depth perception of the potholes and/or to generate a 3D view of a captured road surface, for example, to determine a depth of a pothole from a plurality of angles for better and enhanced perception.”).
36 It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Ahmedt-Aristizabal in view of Ramachandran to incorporate the teachings of Latif, allowing the camera properties taught by Ahmedt-Aristizabal in view of Ramachandran to adjust an infra-red illumination level of a depth sensor of the depth cameras also taught by Ahmedt-Aristizabal in view of Ramachandran. Doing so would allow the system to include various features such as a brightness enhancer, a contrast enhancer, a color enhancer, a hue enhancer, and/or the like, as taught by Latif ([Page 14; Column 6, Lines 23-24]).
37 Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Ahmedt-Aristizabal, D., Nguyen, C., Tychsen-Smith, L., Stacey, A., Li, S., Pathikulangara, J., ... & Wang, D. (2023). Monitoring of pigmented skin lesions using 3D whole body imaging. Computer Methods and Programs in Biomedicine, 232, 107451 (hereinafter Ahmedt-Aristizabal) in view of Ramachandran et al. (US 20220148723 A1) as applied to claim 9 above, and further in view of Wu et al. (US 20220036562 A1).
38 Regarding claim 13, Ahmedt-Aristizabal in view of Ramachandran teaches the system of claim 9 (see claim 9 rejection above), but does not explicitly teach wherein the training data comprises an environment boundary.
39 Wu teaches wherein the training data comprises an environment boundary ([0015] reciting “The neural network after training performs real-time image semantic segmentation on the collected video images based on the extracted features of the working area, thereby perceiving the environment and identifying the boundaries of the working area.”).
40 It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Ahmedt-Aristizabal in view of Ramachandran to incorporate the teachings of Wu, providing an environment boundary when using images from the array of cameras taught by Ahmedt-Aristizabal in view of Ramachandran. Doing so would allow proper formation of the training set, as stated by Wu ([0016]).
41 Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Ahmedt-Aristizabal, D., Nguyen, C., Tychsen-Smith, L., Stacey, A., Li, S., Pathikulangara, J., ... & Wang, D. (2023). Monitoring of pigmented skin lesions using 3D whole body imaging. Computer Methods and Programs in Biomedicine, 232, 107451 (hereinafter Ahmedt-Aristizabal) in view of Ramachandran et al. (US 20220148723 A1) as applied to claim 9 above, and further in view of Eleftherou et al. (US 11694800 B2).
42 Regarding claim 14, Ahmedt-Aristizabal in view of Ramachandran teaches the system of claim 9 (see claim 9 rejection above), but does not explicitly teach wherein the attribute of the subject comprises medical history data of the subject.
43 Eleftherou teaches wherein the attribute of the subject comprises medical history data of the subject ([Abstract] reciting “…a data model describing a patient, wherein: the data comprises medical history data of the patient and observation data received by the computing system during observation of at least one medical professional, the observation data comprises: (i) audio data recorded by a microphone of the computing system, and (ii) image data captured by a camera of the computing system…”).
44 It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Ahmedt-Aristizabal in view of Ramachandran to incorporate the teachings of Eleftherou, obtaining medical history data of the subject or patient of Ahmedt-Aristizabal in view of Ramachandran while still utilizing a camera or cameras. Doing so would reduce an uncertainty value of a data model, as stated by Eleftherou ([Abstract]).
45 Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Ahmedt-Aristizabal, D., Nguyen, C., Tychsen-Smith, L., Stacey, A., Li, S., Pathikulangara, J., ... & Wang, D. (2023). Monitoring of pigmented skin lesions using 3D whole body imaging. Computer Methods and Programs in Biomedicine, 232, 107451 (hereinafter Ahmedt-Aristizabal) in view of Ramachandran et al. (US 20220148723 A1) as applied to claim 16 above, and further in view of Grupp Jr. et al. (US 20230196595 A1).
46 Regarding claim 18, Ahmedt-Aristizabal in view of Ramachandran teaches the non-transitory computer-readable storage medium of claim 16 (see claims 1 and 16 rejections above), and although Ramachandran could teach wherein the array of cameras are re-configured in real time in response to changes to the attribute of the subject (Ramachandran; [0080] reciting “…the second head-mounted display 502 may enable the human subject 116 to view what the user 114 may be viewing through the first head-mounted display 104, as well as the hand-movement (i.e. through the second 3D model 504) of the user 114 in real-time or near real-time.”; [0103] reciting “In accordance with an embodiment, the circuitry 202 may be further configured to control a camera rig comprising one or more imaging sensors (such as the first imaging sensor 302A, the second imaging sensor 302B and the third imaging sensor 302C) to scan at least the first anatomical portion 118 of the body of the human subject 116. The circuitry 202 may further receive 3D scan data of at least the first anatomical portion 118 of the body of the human subject 116 based on the scan. The circuitry 202 may further control the first head-mounted display 104 to render the first 3D model 120 of at least the first anatomical portion 118 of the body based on the received 3D scan data.”), Grupp Jr. is relied upon to teach this limitation more explicitly.
47 Grupp Jr. teaches wherein the array of cameras are re-configured in real time in response to changes to the attribute of the subject ([Abstract] reciting “Medical imaging systems, methods, and devices are disclosed herein. In some embodiments, an imaging system includes (i) a camera array configured to capture intraoperative image data of a surgical scene in substantially real-time and (ii) a processing device communicatively coupled to the camera array…The imaging system is further configured to receive and/or store initial image data, such as medical scan data corresponding to a portion of a patient in the scene.”; [0037] reciting “The virtual camera perspective is controlled by an input controller 106 that can update the virtual camera perspective based on user driven changes to the camera's position and rotation.”; [0052] reciting “In some embodiments, the position and/or shape of an object within the scene 108 may change over time.”).
48 It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Ahmedt-Aristizabal in view of Ramachandran to incorporate the teachings of Grupp Jr., providing a clearer method of re-configuring the array of cameras taught by Ahmedt-Aristizabal in view of Ramachandran in response to a change in the subject or patient captured by the cameras. Doing so would allow the method to periodically or continuously reregister the initial image data to the intraoperative image data to account for intraoperative movement, as stated by Grupp Jr. ([0052]).
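For illustration of re-configuring the array of cameras in real time in response to changes to the attribute of the subject, a minimal polling loop is sketched below; the callables and camera interface are assumptions for illustration and are not drawn from Grupp Jr. or the other cited art:

```python
import time

def monitor_and_reconfigure(get_attribute, generate_updated_properties,
                            cameras, poll_seconds=0.1):
    """Poll for changes to the subject attribute and re-configure the
    camera array whenever a change is observed.

    `get_attribute`, `generate_updated_properties`, and the cameras'
    `apply` method are hypothetical interfaces used only to sketch
    the real-time re-configuration loop.
    """
    last_attribute = None
    while True:
        attribute = get_attribute()
        if attribute != last_attribute:
            # Generate one updated property set per camera and apply it.
            for camera, props in zip(cameras,
                                     generate_updated_properties(attribute)):
                camera.apply(props)
            last_attribute = attribute
        time.sleep(poll_seconds)
```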
49 Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Ahmedt-Aristizabal, D., Nguyen, C., Tychsen-Smith, L., Stacey, A., Li, S., Pathikulangara, J., ... & Wang, D. (2023). Monitoring of pigmented skin lesions using 3D whole body imaging. Computer Methods and Programs in Biomedicine, 232, 107451 (hereinafter Ahmedt-Aristizabal) in view of Ramachandran et al. (US 20220148723 A1) as applied to claim 16 above, and further in view of McCrackin et al. (US 20220341854 A1).
50 Regarding claim 19, Ahmedt-Aristizabal in view of Ramachandran teaches the non-transitory computer-readable storage medium of claim 16 (see claim 16 rejection above), but does not explicitly teach wherein the subject comprises an object being inspected for sale or for repair.
51 McCrackin teaches wherein the subject comprises an object being inspected for sale or for repair ([0034] reciting “Inspection devices are commonly used in order to detect features of interest, such as erosion of a component, within industrial machines. As an example, an inspection device can include a camera that takes pictures of a target portion of a machine, and these pictures can be manually analyzed to detect erosion. Following erosion detection, preventative maintenance (e.g., repair, replacement, etc.) can be performed…”).
52 It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Ahmedt-Aristizabal in view of Ramachandran to incorporate the teachings of McCrackin, using the cameras taught by Ahmedt-Aristizabal in view of Ramachandran to inspect objects in need of repair. Doing so would allow features of interest to be detected before they develop into reduced performance, shutdown, or catastrophic failure, as stated by McCrackin ([0034]).
Conclusion
53 Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
54 Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY TRAN LE whose telephone number is (571)272-5680. The examiner can normally be reached Mon-Thu: 7:30am-5pm; First Fridays Off; Second Fridays: 7:30am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHNNY T LE/ Examiner, Art Unit 2614
/KENT W CHANG/ Supervisory Patent Examiner, Art Unit 2614