Prosecution Insights
Last updated: April 19, 2026
Application No. 18/000,054

A PHOTOBIOMODULATION THERAPY LOW-LEVEL LASER TARGETING SYSTEM

Non-Final OA §102 §103 §112
Filed
Nov 28, 2022
Examiner
HUH, VYNN V
Art Unit
3792
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Cosmetic Edge Pty Ltd.
OA Round
1 (Non-Final)
62%
Grant Probability
Moderate
1-2
OA Rounds
3y 8m
To Grant
99%
With Interview

Examiner Intelligence

Grants 62% of resolved cases
62%
Career Allow Rate
168 granted / 269 resolved
-7.5% vs TC avg
Strong +45% interview lift
+44.6%
Interview Lift
allowance rate for resolved cases with vs. without an interview
Typical timeline
3y 8m
Avg Prosecution
41 currently pending
Career history
310
Total Applications
across all art units
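
The cards above are simple ratios over the examiner's resolved docket: career allow rate is granted over resolved, the TC comparison is the difference from a Tech Center benchmark, and interview lift is the allowance rate with an interview minus the rate without one. A minimal sketch of that arithmetic follows; only the 168/269 split is taken from the card, while the TC benchmark and the interview split are hypothetical placeholders, since those underlying counts are not shown here.

```python
# Illustrative recomputation of the examiner cards above. Only the 168 / 269
# split comes from the card; the TC benchmark and the interview split are
# hypothetical placeholders (the underlying counts are not displayed).
granted, resolved = 168, 269
career_allow_rate = granted / resolved               # ~0.625 -> shown as "62%"

tc_average = career_allow_rate + 0.075               # implied by "-7.5% vs TC avg"
delta_vs_tc = career_allow_rate - tc_average         # -0.075

with_iv = {"granted": 48, "resolved": 49}            # hypothetical interview split
without_iv = {"granted": granted - 48, "resolved": resolved - 49}

rate_with = with_iv["granted"] / with_iv["resolved"]
rate_without = without_iv["granted"] / without_iv["resolved"]
interview_lift = rate_with - rate_without            # the card reports +44.6%

print(f"allow rate {career_allow_rate:.1%}, vs TC {delta_vs_tc:+.1%}, "
      f"interview lift {interview_lift:+.1%}")
```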

Statute-Specific Performance

§101
5.5%
-34.5% vs TC avg
§103
41.0%
+1.0% vs TC avg
§102
19.1%
-20.9% vs TC avg
§112
24.3%
-15.7% vs TC avg
Deltas shown against a Tech Center average estimate • Based on career data from 269 resolved cases
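
Each statute delta is simply the examiner's rate minus the Tech Center benchmark implied by the figures above. A small sketch of that comparison; how the platform defines each per-statute rate (e.g., how §101/§102/§103/§112 outcomes are counted) is not specified here, so the rates are used as given.

```python
# Illustrative comparison of the statute-specific rates above against the
# Tech Center averages they imply (examiner rate minus the shown delta).
examiner = {"101": 0.055, "102": 0.191, "103": 0.410, "112": 0.243}
delta_vs_tc = {"101": -0.345, "102": -0.209, "103": 0.010, "112": -0.157}

for statute, rate in examiner.items():
    tc_avg = rate - delta_vs_tc[statute]
    flag = "below" if rate < tc_avg else "above"
    print(f"§{statute}: examiner {rate:.1%} vs TC avg {tc_avg:.1%} ({flag})")
```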

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status: Claims 1-36 are pending.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7, 10, 25, 33, and 34 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Re Claim 7, the limitation “wherein the beamforming lens may form a pinpoint for XY raster scanning” is indefinite, because of the claim language “may”. It is unclear whether the beamforming lens is configured to form a pinpoint for XY raster scanning.

Re Claim 10, Claim 10 is indefinite, because it depends on claim 10 itself.

Re Claim 25, the limitation “skin marking” is indefinite, because it is unclear whether it is referring to “a skin marking” in claim 19 or a different one.

Re Claims 33 and 34, the limitation “the electronic device” is indefinite, because it lacks antecedent basis.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 6-32, 35, and 36 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dijkstra et al. (US 2019/0030359A1).

Re Claim 1, Dijkstra discloses a photobiomodulation therapy low-level laser targeting system comprising: a controller (para. [0015], a controlling unit and the controlling unit is operably connected to the camera unit, the display unit, the memory unit, and the light projection unit); a low level laser emitter controlled by the controller (para. [0031], the light therapy device uses Photo-bio-modulation Therapy (PBMT) known as Low-Level Laser Therapy (LLLT).
LLLT is used to improve tissue repair, reduce pain and inflammation wherever the beam is applied on the application surface; para. [0033], the light therapy device provides a LED light-based treatment which is a painless, relaxing, and non-invasive skin-care treatment, that has multiple benefits particularly stimulating collagen and requires no downtime. Further, the LED treatments work by using an array of bright light-emitting diodes that send low-level light energy into the deeper layers of the skin; para. [0015], a controlling unit and the controlling unit is operably connected to the camera unit, the display unit, the memory unit, and the light projection unit); and a projector operably coupled to the emitter and controlled by the controller to control the projection direction of light from the emitter (para. [0015], a light projection unit, and the controlling unit is operably connected to the camera unit, the display unit, the memory unit, and the light projection unit. The controlling unit includes a processor which is adapted to execute computer implemented code and software applications stored in the memory unit to perform various functions and control the functionality of the camera unit, the display unit, and the light projection unit; para. [0026], the light therapy device allows the user to select a type of treatment, treatment time, intensity of light projection and shape of a light projection based on his/her body condition; fig. 1, light projection unit 110), wherein: the controller comprises a targeting controller configured for controlling the projector to project light from the emitter onto a skin surface target area in use to target a subdermal target region (fig. 1, light direction controller 112, para. [0061], The light projection unit 110 is having a light head 111, a light direction controller 112, and a light source 113. In this, the light source 113 projects a light on the light direction controller 112 which is able to control and divert the light as per the controlling unit 130 input. Further, the light direction controller 112 uses saccade mirror, direction tuning film, laser sintering, or mirror type galvanometer etc. techniques for controlling a direction of a light emitted from the light source 113), and the controller is configured with geospatial data representing the subdermal target region and wherein the targeting controller is configured for controlling the projector depending on relative positioning of the projector with respect to the skin surface target area and the geospatial data (para. [0086], the light therapy device is having a calibration protocol to identify the 3D co-ordinates (X, Y, and Z) of the portion of the user body to be treated. Further, the calibration protocol uses a 3D (3-dimensional) object detection method to extract the 3D co-ordinates (X, Y, and Z) of the treatment portion. Based upon the 3D co-ordinates the light direction controller 112 projects the light 114 on the treatment portion and follows the motion of the treatment portion of the user to a predefined distance; para. [0064], The light direction controller 112 follows a treatment portion of a user body to an extent by changing a direction of the light projection. 
Further, once the treatment portion starts moving out of reach of the light direction controller 112 the rotation head 250 that rotates on the spindle 230 with the help of a motor keeps the light projection intact on the treatment portion of the user body regardless of the movement of the user body or the treatment portion.). Re Claim 2, Dijkstra discloses that the projector directs the light in two axes (para. [0027], the light projection unit, the camera unit, the memory unit, and the controlling unit are configured to mount on a rotatable head, wherein the rotatable head synchronously rotates in a direction of movement of at least one treatment portion of the user body. Further, a rotation unit comprises a motor and a spindle connected to the rotation head for rotating the rotation head in all directions. Furthermore, the light projected from the light head continuously follows the at least one treatment portion of the user body within a predefined distance.). Re Claim 3, Dijkstra discloses that the projector comprises a mechanical gimbal which controls the orientation of the emitter (para. [0027], the light projection unit, the camera unit, the memory unit, and the controlling unit are configured to mount on a rotatable head, wherein the rotatable head synchronously rotates in a direction of movement of at least one treatment portion of the user body. Further, a rotation unit comprises a motor and a spindle connected to the rotation head for rotating the rotation head in all directions. Furthermore, the light projected from the light head continuously follows the at least one treatment portion of the user body within a predefined distance.). Re Claim 4, Dijkstra discloses that the projector comprises a mechanical gimbal which adjusts a mirror or prism against or through which the light is reflected or propagated (para. [0020], the light direction controller uses techniques like saccade mirror, direction tuning film, laser sintering, or mirror type galvanometer etc. for controlling a direction of the light emitted from the light source; para. [0087], the mirror galvanometer (not shown) acts as the light direction controller 112. A light from the light source 113 is made to fall on at least one mirror of the galvanometer and the controlling unit provides input to the galvanometer, based on the controlling unit 130 input the coil of the galvanometer rotates and projects the light in the desired direction through the light head 111.). Re Claim 6, Dijkstra discloses that the projector comprises a beamforming lens (para. [0076], shape of light projection; para. [0108], the light therapy device adjusts focus of the light projection in case the light rays scattering out of the treatment portion (towards sensitive areas e.g. eyes) and making user uncomfortable, the light therapy device by analyzing the refractive index of the sensitive areas adjusts the focus of the light and attenuates the light rays to go beyond the treatment portion. – The disclosure of adjustment of focus of light reads on “beamforming lens”). Re Claim 7, Dijkstra discloses that the beamforming lens may form a pinpoint for XY raster scanning (para. [0027], a rotation unit comprises a motor and a spindle connected to the rotation head for rotating the rotation head in all directions. Furthermore, the light projected from the light head continuously follows the at least one treatment portion of the user body within a predefined distance; para. 
[0061], the light direction controller 112 uses saccade mirror, direction tuning film, laser sintering, or mirror type galvanometer etc. techniques for controlling a direction of a light emitted from the light source 113; para. [0087], A light from the light source 113 is made to fall on at least one mirror of the galvanometer and the controlling unit provides input to the galvanometer, based on the controlling unit 130 input the coil of the galvanometer rotates and projects the light in the desired direction through the light head 111; para. [0079], In FIG. 4A, the light therapy device 200 identifies a treatment portion 420 on a forehead area of the user 400 automatically by using an artificial intelligence module, the machine learning module, the object detection module, the localization module and the image processing module or the user 400 manually selects the treatment portion 420 via the display unit 160 and projecting a light 114 on the treatment portion 420.). Re Claim 8, Dijkstra discloses that the beamforming lens forms a line which is swept across the skin surface targeted treatment area (para. [0027], a rotation unit comprises a motor and a spindle connected to the rotation head for rotating the rotation head in all directions. Furthermore, the light projected from the light head continuously follows the at least one treatment portion of the user body within a predefined distance; para. [0061], the light direction controller 112 uses saccade mirror, direction tuning film, laser sintering, or mirror type galvanometer etc. techniques for controlling a direction of a light emitted from the light source 113; para. [0087], A light from the light source 113 is made to fall on at least one mirror of the galvanometer and the controlling unit provides input to the galvanometer, based on the controlling unit 130 input the coil of the galvanometer rotates and projects the light in the desired direction through the light head 111; para. [0079], In FIG. 4A, the light therapy device 200 identifies a treatment portion 420 on a forehead area of the user 400 automatically by using an artificial intelligence module, the machine learning module, the object detection module, the localization module and the image processing module or the user 400 manually selects the treatment portion 420 via the display unit 160 and projecting a light 114 on the treatment portion 420.). Re Claim 9, Dijkstra discloses that the projector is set at a preconfigured position with respect to the subdermal target region (para. [0089], the controlling unit 130 does not allow the light projection to go beyond the treatment portion or a predefined distance; para. [0027], the light projected from the light head continuously follows the at least one treatment portion of the user body within a predefined distance). Re Claim 10, Dijkstra disclose that the controller is configured with relative positional coordinates representing a relative position of the projector with respect to the subdermal target region (para. [0099], the master device identifies the location and position of the user and allocates separate treatment portion to each slave light therapy device by using artificial intelligence, machine learning, object detection and localization module or the user may manually select, identify, locate and treat the separate treatment portion for each light therapy device; para. 
[0086], the light therapy device is having a calibration protocol to identify the 3D co-ordinates (X, Y, and Z) of the portion of the user body to be treated. Further, the calibration protocol uses a 3D (3-dimensional) object detection method to extract the 3D co-ordinates (X, Y, and Z) of the treatment portion. Based upon the 3D co-ordinates the light direction controller 112 projects the light 114 on the treatment portion and follows the motion of the treatment portion of the user to a predefined distance.). Re Claim 11, Dijkstra discloses that the controller comprises a data interface for receiving geospatial data obtained from at least one of a medical scanning devices and procedures comprising at least one of a CT scanner, CAT- scanner, MRI scanner, colonoscopy, endoscopy, x-ray scanner, mammogram and ultrasound investigation (para. [0065], [0068], the camera 121 is selected from a group of a normal optical camera, a thermographic camera, an infrared spectroscopy camera, an IP camera and a combination thereof. Further, the at least one sensor is selected from group of non-contact type sensors including radiation detectors, optical pyrometers, fiber optic temperature sensors, IR sensors, temperature sensors, thermal imaging sensor, ultrasonic, and infrared sensors etc.; para. [0016], The camera unit is scanning a user body by using a camera and capturing an image data of a treatment portion of the user body. The camera unit is having sensors to collect various body parameter of the user body i.e. temperature of the user body, type of body condition (external body condition or internal body condition), blood flow and other parameter; para. [0072], The camera unit 120 starts scanning the at least one portion of the user body, at step 310, the at least one portion of the user body includes a face, hand, head, leg, chest, stomach, genital area etc. or full body of the user. The camera unit 120 having the camera 121 is adapted to capture an image data of the at least one portion of the user body and the at least one sensor 122 of the camera unit 120 is collecting at least one body parameter related to the image data, the at least one body parameter includes heart rate, blood flow, temperature and other parameters related to image data.). Re Claim 12, Dijkstra discloses a computer aided modelling geospatial editor for editing the geospatial data with reference to a 3D patient model (para. [0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition. Further, the controlling unit also utilizes the artificial intelligence module, machine learning module, localization module, object detection module and image processing module to automatically selecting a treatment based on the body condition of the user and automatically projecting a light based on the body condition without or with minimum user intervention; para. [0069], the localization module provides a complete visual module to localize objects using the camera. The localization module requires a preprocessing algorithm to segment a scene into objects. Ideally, the preprocessing algorithm should be able to segment an unstructured scene into objects using visual cues such as shape, texture, edges, and color in real-time. 
In the lack of a preprocessing algorithm which can satisfy all these constraints for a completely unstructured environment, one is forced to put some structure into the environment to make the detection and segmentation of objects easier; para. [0070], The Object detection module provides a way to identify specifically trained objects within the current image. Once the module is trained with sample template images it will identify those objects within the current image depending on the filtered parameters of confidence, size, rotation, etc.). Re Claim 13, Dijkstra discloses that an incident point on the skin surface target area is controlled according to a penetration depth depending on relative positioning of the projector and the subdermal target region (para. [0027], the light projected from the light head continuously follows the at least one treatment portion of the user body within a predefined distance; para. [0086], the light therapy device is having a calibration protocol to identify the 3D co-ordinates (X, Y, and Z) of the portion of the user body to be treated. Further, the calibration protocol uses a 3D (3-dimensional) object detection method to extract the 3D co-ordinates (X, Y, and Z) of the treatment portion. Based upon the 3D co-ordinates the light direction controller 112 projects the light 114 on the treatment portion and follows the motion of the treatment portion of the user to a predefined distance.). Re Claim 14, Dijkstra discloses a ranging controller operably coupled to a sensor for determining a target region and wherein the targeting controller controls the projector according to the target region determined by the ranging controller (para. [0065], the camera unit 120 is having a camera 121 and at least one sensor 122. Wherein, the camera 121 is selected from a group of a normal optical camera, a thermographic camera, an infrared spectroscopy camera, an IP camera and a combination thereof; para. [0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition. Further, the controlling unit also utilizes the artificial intelligence module, machine learning module, localization module, object detection module and image processing module to automatically selecting a treatment based on the body condition of the user and automatically projecting a light based on the body condition without or with minimum user intervention.). Re Claim 15, Dijkstra discloses that the sensor comprises a thermal sensor configured for determining a skin surface heat map topography (para. [0065], Wherein, the camera 121 is selected from a group of a normal optical camera, a thermographic camera, an infrared spectroscopy camera, an IP camera and a combination thereof. Further, the at least one sensor is selected from group of non-contact type sensors including radiation detectors, optical pyrometers, fiber optic temperature sensors, IR sensors, temperature sensors, thermal imaging sensor, ultrasonic, and infrared sensors etc.). Re Claim 16, Dijkstra discloses that the targeting controller is configured for targeting areas of the surface heat map topography exceeding a temperature threshold (para. [0127], the light therapy device uses a thermographic camera. the thermographic camera forms an image using infrared radiation, similar to a common camera that forms an image using visible light. 
The detector elements create a very detailed temperature pattern called a Thermogram. Thermal imaging cameras take measuring temperature to the next level, instead of getting a number for the temperature it shows a picture displaying the temperature differences of a surface; para. [0128], the thermographic camera can easily identify the portions of the user body having different body temperature. This will allow the light therapy device to identify the areas with increased blood flow (Generally the areas with increased blood are mostly inflammations) and the light therapy device is able to provide treatment based on the blood flow or temperature of the body portion. The use of thermographic camera further allows the light therapy device to identify the areas which do not have sufficient blood flow the light therapy device by using the infrared or red light improves the rate of blood flow in these areas. Further, based on the temperature of different portions of the body the light therapy device can analyze the various body disorder easily and can provide treatment according to the body condition or disorder.) Re Claim 17, Dijkstra discloses that the thermal sensor comprises an infrared camera (para. [0065], Wherein, the camera 121 is selected from a group of a normal optical camera, a thermographic camera, an infrared spectroscopy camera, an IP camera and a combination thereof. Further, the at least one sensor is selected from group of non-contact type sensors including radiation detectors, optical pyrometers, fiber optic temperature sensors, IR sensors, temperature sensors, thermal imaging sensor, ultrasonic, and infrared sensors etc.). Re Claim 18, Dijkstra discloses that the thermal sensor comprises an infrared temperature sensor which emits an infrared energy beam focused by a lens to a surface of the skin surface target area (para. [0127], the light therapy device uses a thermographic camera. the thermographic camera forms an image using infrared radiation, similar to a common camera that forms an image using visible light. Instead of the 400-700-nanometer range of the visible light camera, infrared cameras operate in wavelengths as long as 1600 nm. A special lens focuses the infrared light emitted by all of the objects in view. The focused light is scanned by a phased array of infrared-detector elements. The detector elements create a very detailed temperature pattern called a Thermogram. Thermal imaging cameras take measuring temperature to the next level, instead of getting a number for the temperature it shows a picture displaying the temperature differences of a surface.). Re Claim 19, Dijkstra discloses that the sensor comprises a vision sensor configured for identifying a skin marking (para. [0022], the external body condition is skin disorder, skin condition, skin cancer, aged skin, dead skin, sun tanning, wounds, allergy, inflammation, dermatitis, hives, marks, acne, redness, irritants, itching, swelling, sebaceous, lesions pimples, or wrinkles etc.; para. [0023], the internal body condition is blood flow, fever, erectile dysfunction, joints pain, muscles pain, grey hair, hair fall, eyebrows, cellular improvement, sleep disorder, jaundice, cancer, or any other internal body problem; para. [0016], The camera unit is scanning a user body by using a camera and capturing an image data of a treatment portion of the user body. 
The controlling unit by utilizing an artificial intelligence module, localization module, image processing module, localization module and object detection module to identify the image data and body parameters of the user body. The controlling unit retrieves the pre-stored data from the memory unit and compare the image data and the body parameters with the pre-stored data, based on the comparison the controlling unit identifies a treatment portion from the image data and also identifies an external body condition and/or an internal body condition from the identified treatment portion; para. [0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition; para. [0069], Localization Module: the localization module provides a complete visual module to localize objects using the camera. The localization module requires a preprocessing algorithm to segment a scene into objects. Ideally, the preprocessing algorithm should be able to segment an unstructured scene into objects using visual cues such as shape, texture, edges, and color in real-time.; para. [0070], Object detection Module: The Object detection module provides a way to identify specifically trained objects within the current image. Once the module is trained with sample template images it will identify those objects within the current image depending on the filtered parameters of confidence, size, rotation, etc.; para. [0085], the light therapy device 200 automatically scan, identify, select, prioritize and treat the at least one treatment portion with the help of the artificial intelligence, machine learning, object detection, image processing module and the localization module; para. [0086], the light therapy device is having a calibration protocol to identify the 3D co-ordinates (X, Y, and Z) of the portion of the user body to be treated. Further, the calibration protocol uses a 3D (3-dimensional) object detection method to extract the 3D co-ordinates (X, Y, and Z) of the treatment portion. Based upon the 3D co-ordinates the light direction controller 112 projects the light 114 on the treatment portion and follows the motion of the treatment portion of the user to a predefined distance.). Re Claim 20, Dijkstra discloses that the skin marking is a point and wherein the targeting controller is configured for targeting a region around the point (para. [0022], the external body condition is skin disorder, skin condition, skin cancer, aged skin, dead skin, sun tanning, wounds, allergy, inflammation, dermatitis, hives, marks, acne, redness, irritants, itching, swelling, sebaceous, lesions pimples, or wrinkles etc.; para. [0023], the internal body condition is blood flow, fever, erectile dysfunction, joints pain, muscles pain, grey hair, hair fall, eyebrows, cellular improvement, sleep disorder, jaundice, cancer, or any other internal body problem; para. [0016], The camera unit is scanning a user body by using a camera and capturing an image data of a treatment portion of the user body. The controlling unit by utilizing an artificial intelligence module, localization module, image processing module, localization module and object detection module to identify the image data and body parameters of the user body. 
The controlling unit retrieves the pre-stored data from the memory unit and compare the image data and the body parameters with the pre-stored data, based on the comparison the controlling unit identifies a treatment portion from the image data and also identifies an external body condition and/or an internal body condition from the identified treatment portion; para. [0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition; para. [0069], Localization Module: the localization module provides a complete visual module to localize objects using the camera. The localization module requires a preprocessing algorithm to segment a scene into objects. Ideally, the preprocessing algorithm should be able to segment an unstructured scene into objects using visual cues such as shape, texture, edges, and color in real-time.; para. [0070], Object detection Module: The Object detection module provides a way to identify specifically trained objects within the current image. Once the module is trained with sample template images it will identify those objects within the current image depending on the filtered parameters of confidence, size, rotation, etc.; para. [0085], the light therapy device 200 automatically scan, identify, select, prioritize and treat the at least one treatment portion with the help of the artificial intelligence, machine learning, object detection, image processing module and the localization module; para. [0086], the light therapy device is having a calibration protocol to identify the 3D co-ordinates (X, Y, and Z) of the portion of the user body to be treated. Further, the calibration protocol uses a 3D (3-dimensional) object detection method to extract the 3D co-ordinates (X, Y, and Z) of the treatment portion. Based upon the 3D co-ordinates the light direction controller 112 projects the light 114 on the treatment portion and follows the motion of the treatment portion of the user to a predefined distance.). Re Claim 21, Dijkstra discloses that the skin marking is a marked boundary and wherein the targeting controller is configured for targeting a region within the boundary (para. [0022], the external body condition is skin disorder, skin condition, skin cancer, aged skin, dead skin, sun tanning, wounds, allergy, inflammation, dermatitis, hives, marks, acne, redness, irritants, itching, swelling, sebaceous, lesions pimples, or wrinkles etc.; para. [0023], the internal body condition is blood flow, fever, erectile dysfunction, joints pain, muscles pain, grey hair, hair fall, eyebrows, cellular improvement, sleep disorder, jaundice, cancer, or any other internal body problem; para. [0016], The camera unit is scanning a user body by using a camera and capturing an image data of a treatment portion of the user body. The controlling unit by utilizing an artificial intelligence module, localization module, image processing module, localization module and object detection module to identify the image data and body parameters of the user body. 
The controlling unit retrieves the pre-stored data from the memory unit and compare the image data and the body parameters with the pre-stored data, based on the comparison the controlling unit identifies a treatment portion from the image data and also identifies an external body condition and/or an internal body condition from the identified treatment portion; para. [0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition; para. [0069], Localization Module: the localization module provides a complete visual module to localize objects using the camera. The localization module requires a preprocessing algorithm to segment a scene into objects. Ideally, the preprocessing algorithm should be able to segment an unstructured scene into objects using visual cues such as shape, texture, edges, and color in real-time.; para. [0070], Object detection Module: The Object detection module provides a way to identify specifically trained objects within the current image. Once the module is trained with sample template images it will identify those objects within the current image depending on the filtered parameters of confidence, size, rotation, etc.; para. [0085], the light therapy device 200 automatically scan, identify, select, prioritize and treat the at least one treatment portion with the help of the artificial intelligence, machine learning, object detection, image processing module and the localization module; para. [0086], the light therapy device is having a calibration protocol to identify the 3D co-ordinates (X, Y, and Z) of the portion of the user body to be treated. Further, the calibration protocol uses a 3D (3-dimensional) object detection method to extract the 3D co-ordinates (X, Y, and Z) of the treatment portion. Based upon the 3D co-ordinates the light direction controller 112 projects the light 114 on the treatment portion and follows the motion of the treatment portion of the user to a predefined distance.). Re Claim 22, Dijkstra discloses that the targeting controller employs boundary area analysis image processing on image data obtained by the vision sensor to determine the area within a marked boundary for targeting (para. [0022], the external body condition is skin disorder, skin condition, skin cancer, aged skin, dead skin, sun tanning, wounds, allergy, inflammation, dermatitis, hives, marks, acne, redness, irritants, itching, swelling, sebaceous, lesions pimples, or wrinkles etc.; para. [0023], the internal body condition is blood flow, fever, erectile dysfunction, joints pain, muscles pain, grey hair, hair fall, eyebrows, cellular improvement, sleep disorder, jaundice, cancer, or any other internal body problem; para. [0016], The camera unit is scanning a user body by using a camera and capturing an image data of a treatment portion of the user body. The controlling unit by utilizing an artificial intelligence module, localization module, image processing module, localization module and object detection module to identify the image data and body parameters of the user body. 
The controlling unit retrieves the pre-stored data from the memory unit and compare the image data and the body parameters with the pre-stored data, based on the comparison the controlling unit identifies a treatment portion from the image data and also identifies an external body condition and/or an internal body condition from the identified treatment portion; para. [0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition; para. [0069], Localization Module: the localization module provides a complete visual module to localize objects using the camera. The localization module requires a preprocessing algorithm to segment a scene into objects. Ideally, the preprocessing algorithm should be able to segment an unstructured scene into objects using visual cues such as shape, texture, edges, and color in real-time.; para. [0070], Object detection Module: The Object detection module provides a way to identify specifically trained objects within the current image. Once the module is trained with sample template images it will identify those objects within the current image depending on the filtered parameters of confidence, size, rotation, etc.; para. [0085], the light therapy device 200 automatically scan, identify, select, prioritize and treat the at least one treatment portion with the help of the artificial intelligence, machine learning, object detection, image processing module and the localization module; para. [0086], the light therapy device is having a calibration protocol to identify the 3D co-ordinates (X, Y, and Z) of the portion of the user body to be treated. Further, the calibration protocol uses a 3D (3-dimensional) object detection method to extract the 3D co-ordinates (X, Y, and Z) of the treatment portion. Based upon the 3D co-ordinates the light direction controller 112 projects the light 114 on the treatment portion and follows the motion of the treatment portion of the user to a predefined distance.). Re Claim 23, Dijkstra discloses that the skin marking is a visible skin marking (para. [0022], the external body condition is skin disorder, skin condition, skin cancer, aged skin, dead skin, sun tanning, wounds, allergy, inflammation, dermatitis, hives, marks, acne, redness, irritants, itching, swelling, sebaceous, lesions pimples, or wrinkles etc.; para. [0023], the internal body condition is blood flow, fever, erectile dysfunction, joints pain, muscles pain, grey hair, hair fall, eyebrows, cellular improvement, sleep disorder, jaundice, cancer, or any other internal body problem; para. [0016], The camera unit is scanning a user body by using a camera and capturing an image data of a treatment portion of the user body. The controlling unit by utilizing an artificial intelligence module, localization module, image processing module, localization module and object detection module to identify the image data and body parameters of the user body. The controlling unit retrieves the pre-stored data from the memory unit and compare the image data and the body parameters with the pre-stored data, based on the comparison the controlling unit identifies a treatment portion from the image data and also identifies an external body condition and/or an internal body condition from the identified treatment portion; para. 
[0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition; para. [0069], Localization Module: the localization module provides a complete visual module to localize objects using the camera. The localization module requires a preprocessing algorithm to segment a scene into objects. Ideally, the preprocessing algorithm should be able to segment an unstructured scene into objects using visual cues such as shape, texture, edges, and color in real-time.; para. [0070], Object detection Module: The Object detection module provides a way to identify specifically trained objects within the current image. Once the module is trained with sample template images it will identify those objects within the current image depending on the filtered parameters of confidence, size, rotation, etc.; para. [0085], the light therapy device 200 automatically scan, identify, select, prioritize and treat the at least one treatment portion with the help of the artificial intelligence, machine learning, object detection, image processing module and the localization module; para. [0086], the light therapy device is having a calibration protocol to identify the 3D co-ordinates (X, Y, and Z) of the portion of the user body to be treated. Further, the calibration protocol uses a 3D (3-dimensional) object detection method to extract the 3D co-ordinates (X, Y, and Z) of the treatment portion. Based upon the 3D co-ordinates the light direction controller 112 projects the light 114 on the treatment portion and follows the motion of the treatment portion of the user to a predefined distance.). Re Claim 24, Dijkstra discloses that the skin marking is an infrared visible skin marking (para. [0127], the light therapy device uses a thermographic camera. the thermographic camera forms an image using infrared radiation, similar to a common camera that forms an image using visible light. The detector elements create a very detailed temperature pattern called a Thermogram. Thermal imaging cameras take measuring temperature to the next level, instead of getting a number for the temperature it shows a picture displaying the temperature differences of a surface; para. [0128], the thermographic camera can easily identify the portions of the user body having different body temperature. This will allow the light therapy device to identify the areas with increased blood flow (Generally the areas with increased blood are mostly inflammations) and the light therapy device is able to provide treatment based on the blood flow or temperature of the body portion. The use of thermographic camera further allows the light therapy device to identify the areas which do not have sufficient blood flow the light therapy device by using the infrared or red light improves the rate of blood flow in these areas. Further, based on the temperature of different portions of the body the light therapy device can analyze the various body disorder easily and can provide treatment according to the body condition or disorder.). Re Claim 25, Dijkstra discloses that skin marking is indicated with reference to a display of image data captured by the vision sensor and wherein the ranging controller is configured to thereafter target the indicated marking (para. 
[0021], displaying the image data, the treatment portion of the user body, the external body condition, the internal body condition and the treatment information, wherein, the remote device is adapted to send input to the controller unit via communication means; para. [0093], The user 400 is able to see his/her portion of the body that needs to be treated on the mobile device 520 display and is able to adjust the portion of the body that needs to be treated according to best view; para. [0094], the light therapy device 200 and the image data from the camera unit 120 displays on the laptop display where the user can manually select and prioritize the body conditions based upon his/her own intellect; para. [0095], the user interface 430 is showing the image data on the display where the user can manually or the device can automatically identify, select, prioritize and treat the body conditions by using artificial intelligence, machine learning, localization module, image processing module, and object detection module; para. [0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition. Further, the controlling unit also utilizes the artificial intelligence module, machine learning module, localization module, object detection module and image processing module to automatically selecting a treatment based on the body condition of the user and automatically projecting a light based on the body condition without or with minimum user intervention). Re Claim 26, Dijkstra discloses that the sensor is a camera and wherein the ranging controller uses image processing on image data received therefrom to determine the target region (para. [0065], the camera unit 120 is having a camera 121 and at least one sensor 122. Wherein, the camera 121 is selected from a group of a normal optical camera, a thermographic camera, an infrared spectroscopy camera, an IP camera and a combination thereof; para. [0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition. Further, the controlling unit also utilizes the artificial intelligence module, machine learning module, localization module, object detection module and image processing module to automatically selecting a treatment based on the body condition of the user and automatically projecting a light based on the body condition without or with minimum user intervention.). Re Claim 27, Dijkstra discloses that the ranging controller targets a selected portion of a 3D patient model (para. [0086], the light therapy device is having a calibration protocol to identify the 3D co-ordinates (X, Y, and Z) of the portion of the user body to be treated. Further, the calibration protocol uses a 3D (3-dimensional) object detection method to extract the 3D co-ordinates (X, Y, and Z) of the treatment portion. Based upon the 3D co-ordinates the light direction controller 112 projects the light 114 on the treatment portion and follows the motion of the treatment portion of the user to a predefined distance; para. 
[0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition. Further, the controlling unit also utilizes the artificial intelligence module, machine learning module, localization module, object detection module and image processing module to automatically selecting a treatment based on the body condition of the user and automatically projecting a light based on the body condition without or with minimum user intervention; para. [0069], the localization module provides a complete visual module to localize objects using the camera. The localization module requires a preprocessing algorithm to segment a scene into objects. Ideally, the preprocessing algorithm should be able to segment an unstructured scene into objects using visual cues such as shape, texture, edges, and color in real-time. In the lack of a preprocessing algorithm which can satisfy all these constraints for a completely unstructured environment, one is forced to put some structure into the environment to make the detection and segmentation of objects easier; para. [0070], The Object detection module provides a way to identify specifically trained objects within the current image. Once the module is trained with sample template images it will identify those objects within the current image depending on the filtered parameters of confidence, size, rotation, etc.). Re Claim 28, Dijkstra discloses that the ranging controller uses image recognition to recognise the selected portion (para. [0086], the light therapy device is having a calibration protocol to identify the 3D co-ordinates (X, Y, and Z) of the portion of the user body to be treated. Further, the calibration protocol uses a 3D (3-dimensional) object detection method to extract the 3D co-ordinates (X, Y, and Z) of the treatment portion. Based upon the 3D co-ordinates the light direction controller 112 projects the light 114 on the treatment portion and follows the motion of the treatment portion of the user to a predefined distance; para. [0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize the at least one body condition. Further, the controlling unit also utilizes the artificial intelligence module, machine learning module, localization module, object detection module and image processing module to automatically selecting a treatment based on the body condition of the user and automatically projecting a light based on the body condition without or with minimum user intervention; para. [0069], the localization module provides a complete visual module to localize objects using the camera. The localization module requires a preprocessing algorithm to segment a scene into objects. Ideally, the preprocessing algorithm should be able to segment an unstructured scene into objects using visual cues such as shape, texture, edges, and color in real-time. In the lack of a preprocessing algorithm which can satisfy all these constraints for a completely unstructured environment, one is forced to put some structure into the environment to make the detection and segmentation of objects easier; para. 
[0070], The Object detection module provides a way to identify specifically trained objects within the current image. Once the module is trained with sample template images it will identify those objects within the current image depending on the filtered parameters of confidence, size, rotation, etc.). Re Claim 29, Dijkstra discloses that the system comprises a small form applicator device comprising the emitter and projector therein (fig. 1, fig. 2, light projection unit 110) and wherein the applicator device is operably coupled to a user interface device having a digital display (fig. 1, fig. 2, display unit 160) and wherein the digital display displays a user interface for controlling the controller thereon (para. [0078], the controlling unit 130 allows the user to manually identify at least one external body condition and/or at least one internal body condition via the display unit 160. Furthermore, the controlling unit 130 allows the user to select a treatment profile from the memory unit 140 or to create a new treatment profile based on his/her own intellect or other references e.g. internet, books, literature etc. via the display unit 160 at step 343.). Re Claim 30, Dijkstra discloses that the applicator device attaches to the user interface device (fig. 1, fig. 2, light projection unit 110, display unit 160) and wherein the controller further comprises a ranging controller operably coupled to a sensor for determining a target region and wherein the targeting controller controls the projector according to the target region determined by the ranging controller irrespective of the relative orientation and position of the user interface device and the transdermal target region (para. [0065], the camera unit 120 is having a camera 121 and at least one sensor 122. Wherein, the camera 121 is selected from a group of a normal optical camera, a thermographic camera, an infrared spectroscopy camera, an IP camera and a combination thereof; para. [0068], The controlling unit 130 uses an artificial intelligence module, machine learning module, localization module, object detection module and image processing module in order to automatically identify, select, localize and prioritize t
Read full office action
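
For context on the anticipation mapping: claim 1's targeting controller steers the projector from two inputs, the projector's position relative to the skin surface target area and 3D geospatial data for the subdermal target region, which the examiner reads onto Dijkstra's calibration protocol (X, Y, Z coordinates driving a galvanometer-type light direction controller). A minimal geometric sketch of that kind of mapping is below; the coordinate frame, the `aim_angles` helper, and all numbers are hypothetical illustrations, not the applicant's or Dijkstra's implementation.

```python
import numpy as np

def aim_angles(projector_pos, target_pos):
    """Pan/tilt (radians) needed to point a beam from projector_pos at target_pos.

    Both inputs are (x, y, z) coordinates in one shared frame, e.g. the kind of
    calibration frame described in Dijkstra para. [0086]. Purely illustrative.
    """
    d = np.asarray(target_pos, dtype=float) - np.asarray(projector_pos, dtype=float)
    pan = np.arctan2(d[1], d[0])                   # rotation about the vertical axis
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))  # elevation relative to horizontal
    return pan, tilt

# Example: projector 0.5 m above the skin plane, subdermal target 8 mm below an
# entry point offset 0.1 m laterally (all coordinates hypothetical).
pan, tilt = aim_angles(projector_pos=(0.0, 0.0, 0.5), target_pos=(0.10, 0.0, -0.008))
print(f"pan={np.degrees(pan):.1f} deg, tilt={np.degrees(tilt):.1f} deg")
```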

Prosecution Timeline

Nov 28, 2022
Application Filed
Sep 29, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594430
TEMPERATURE SENSING OF IMPLANTED WIRELESS RECHARGE COIL
2y 5m to grant Granted Apr 07, 2026
Patent 12582835
LIGHT THERAPY TREATMENT MODALITY WITH OSCILLATING AND NONOSCILLATING WAVELENGTHS
2y 5m to grant Granted Mar 24, 2026
Patent 12569196
WEARABLE PHYSIOLOGICAL MONITORING SYSTEMS AND METHODS
2y 5m to grant Granted Mar 10, 2026
Patent 12564335
LOW POWER RECEIVER FOR IN VIVO CHANNEL SENSING AND INGESTIBLE SENSOR DETECTION WITH WANDERING FREQUENCY
2y 5m to grant Granted Mar 03, 2026
Patent 12478325
BIOLOGICAL INFORMATION MONITORING SYSTEM
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
99%
With Interview (+44.6%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 269 resolved cases by this examiner. Grant probability derived from career allow rate.
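
The projection panel appears to compose the examiner statistics directly: grant probability mirrors the career allow rate, the with-interview figure adds the interview lift (capped near 100%), and the timeline fields carry over the examiner medians. A sketch of that composition under those assumptions; the platform's actual model is not disclosed, and the 99% shown above suggests its capping or combination rule differs slightly from this one.

```python
def project(allow_rate: float, interview_lift: float, cap: float = 1.0) -> dict:
    """Hypothetical recreation of the projection panel: baseline grant probability
    equals the career allow rate; the with-interview figure adds the interview
    lift, capped; timeline fields are carried over from the examiner medians."""
    return {
        "grant_probability": allow_rate,
        "with_interview": min(allow_rate + interview_lift, cap),
        "expected_oa_rounds": "1-2",
        "median_time_to_grant": "3y 8m",
    }

p = project(allow_rate=0.62, interview_lift=0.446)
print(f"{p['grant_probability']:.0%} baseline, {p['with_interview']:.0%} with interview")
# -> "62% baseline, 100% with interview"; the panel shows 99%, so the real
#    combination rule presumably differs slightly from this sketch.
```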
