DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see Applicant’s Remarks, filed 01/23/26, with respect to the rejections of claims 1-8 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Siminoff US 20180114421.
Regarding claim 1, Applicant has amended claim 1 to recite that at least one processor is configured to acquire posture detection data related to posture detected via an internal sensor of the measurement apparatus, the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition (Applicant’s Remarks, pages 8-9). Examiner agrees with Applicant. Siminoff teaches monitoring and detecting (block B670) motion in at least one motion zone of the A/V recording and communication device (measurement apparatus), such as (but not limited to) the A/V recording and communication doorbell 130. Motion may be detected using the PIR sensors 144 (internal sensor). Upon detecting motion, the process may include determining whether at least one intrusion zone conditional setting is satisfied. If a conditional setting of the at least one intrusion zone comprises a body posture of a person within the motion zone, then the process may compare a detected body posture of a person within the motion zone to one or more preset body postures to determine whether the detected body posture matches the one or more preset body postures (the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition) (paragraphs 0131-0132).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over LaChapelle et al US 20180299534 in view of Katata et al US 20210312564, and further in view of Siminoff US 20180114421.
Regarding claim 1, LaChapelle et al teaches an information processing apparatus that processes segmented point cloud data output from a measurement apparatus including an external sensor that includes a first sensor that acquires first segmented point cloud data by scanning laser light (the lidar system 100 can use the scan path to generate a point cloud (paragraph 0057)) and a second sensor that acquires second segmented point cloud data based on a plurality of camera images (a camera 606 cooperates with a lidar system 608 to obtain information about the environment; the system 600 may operate in a vehicle and collect information (point cloud) about the area in front of the vehicle (paragraph 0123). Note: the camera 606 can capture a large number of pixels at the same time (paragraph 0123); each of these depth-mapped points may be referred to as a pixel, and a collection of pixels captured in succession (which may be referred to as a depth map, a point cloud, or a frame) may be rendered as an image (paragraph 0063)), and repeatedly scans a surrounding space to acquire the segmented point cloud data, which includes the first segmented point cloud data and the second segmented point cloud data, for each scan (the lidar system 100 may be configured to repeatedly capture or generate point clouds of a field of regard (paragraph 0064), in accordance with a scan pattern corresponding to one channel (see Fig. 5) or several channels (see Fig. 6); the pixels obtained by the lidar system 608 may be referred to below as “lidar pixels” for clarity, and the camera 606 may obtain photographic images at a certain rate concurrently with the lidar system 608 obtaining lidar pixels (paragraph 0125)), and
wherein the processor generates the combined segmented point cloud data by partially selecting data from each of the first segmented point cloud data and the second segmented point cloud data based on a feature of a structure shown in at least one camera image among the plurality of camera images, and generates the combined point cloud data by combining a plurality of pieces of the generated combined segmented point cloud data (the image processing software 620 may generate a combined, time-aligned data set; for example, the image processing software 620 may generate and store in the memory 602 an array corresponding to the higher one of the two resolutions (of the camera 606 and the lidar system 608), with each element storing both a lidar pixel and a camera pixel (paragraphs 0131-0132 and Fig. 14)).
LaChapelle et al fails to teach an internal sensor that detects a posture to acquire posture detection data, the information processing apparatus comprising:
at least one processor, wherein the processor is configured to:
generate combined point cloud data by executing combination processing using a plurality of pieces of the segmented point cloud data acquired in a period during which the posture detection data satisfies an allowable condition, among a plurality of pieces of the segmented point cloud data acquired at different acquisition times by the external sensor,
Katata et al teaches an internal sensor that detects a posture to acquire posture detection data (the position sensor is provided to detect the position of each external sensor, and the three-axis sensor is provided to detect the posture of each external sensor (paragraph 0040)), the information processing apparatus comprising:
at least one processor (the control unit 24 includes a CPU (paragraph 0041)), wherein the processor is configured to:
generate combined point cloud data by executing combination processing using a plurality of pieces of the segmented point cloud data acquired in a period during which the posture detection data satisfies an allowable condition (the sensor unit 27 includes a plurality of external sensors for recognizing the peripheral environment of the vehicle, such as a lidar and a camera, and also includes a plurality of internal field sensors such as a GPS receiver, a gyro sensor, a position sensor, and a triaxial sensor; the lidar discretely measures the distance to an object existing in the outside world, recognizes the surface of the object as a three-dimensional point cloud, and generates point cloud data; the camera generates image data taken from the vehicle; the position sensor is provided to detect the position of each external sensor, and the three-axis sensor is provided to detect the posture of each external sensor (paragraph 0040). Note: the distance to an object would read on the allowable condition, since if the object is not close enough, the sensor will not detect it), among a plurality of pieces of the segmented point cloud data acquired at different acquisition times by the external sensor (the upload data generating unit generates upload data based on the own vehicle position information supplied from the position estimating unit, the object data supplied from the object detecting unit, and the output data of the sensor unit 27 supplied from the sensor data cache (paragraph 0044). Note: the uploaded data is a combination of different data that would have been collected at different times),
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified LaChapelle et al to include: an internal sensor that detects a posture to acquire posture detection data, the information processing apparatus comprising: at least one processor, wherein the processor is configured to: generate combined point cloud data by executing combination processing using a plurality of pieces of the segmented point cloud data acquired in a period during which the posture detection data satisfies an allowable condition, among a plurality of pieces of the segmented point cloud data acquired at different acquisition times by the external sensor, as taught by Katata et al.
The reason for doing so would have been to accurately sense the environment around the vehicle in order to reliably detect objects or obstructions.
LaChapelle et al in view of Katata et al fails to teach acquire posture detection data related to posture detected via an internal sensor of the measurement apparatus, the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition.
Siminoff teaches acquire posture detection data related to posture detected via an internal sensor of the measurement apparatus, the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition (monitoring and detecting (block B670) motion in at least one motion zone of the A/V recording and communication device (measurement apparatus), such as (but not limited to) the A/V recording and communication doorbell 130; motion may be detected using the PIR sensors 144 (internal sensor). Upon detecting motion, the process may include determining whether at least one intrusion zone conditional setting is satisfied. If a conditional setting of the at least one intrusion zone comprises a body posture of a person within the motion zone, then the process may compare a detected body posture of a person within the motion zone to one or more preset body postures to determine whether the detected body posture matches the one or more preset body postures (the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition) (paragraphs 0131-0132)).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified LaChapelle et al in view of Katata et al to include: acquire posture detection data related to posture detected via an internal sensor of the measurement apparatus, the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition, as taught by Siminoff.
The reason for doing so would have been to accurately sense the environment around the vehicle in order to reliably detect objects or obstructions.
Regarding claim 6, LaChapelle et al teaches wherein the measurement apparatus is provided in an unmanned moving object (the lidar system 100 integrated into an autonomous vehicle (paragraph 0098)).
Regarding claim 7, LaChapelle et al teaches an information processing method that processes segmented point cloud data output from a measurement apparatus including an external sensor that includes a first sensor that acquires first segmented point cloud data by scanning laser light (the lidar system 100 can use the scan path to generate a point cloud (paragraph 0057)) and a second sensor that acquires second segmented point cloud data based on a plurality of camera images (a camera 606 cooperates with a lidar system 608 to obtain information about the environment; the system 600 may operate in a vehicle and collect information (point cloud) about the area in front of the vehicle (paragraph 0123). Note: the camera 606 can capture a large number of pixels at the same time (paragraph 0123); each of these depth-mapped points may be referred to as a pixel, and a collection of pixels captured in succession (which may be referred to as a depth map, a point cloud, or a frame) may be rendered as an image (paragraph 0063)), and repeatedly scans a surrounding space to acquire the segmented point cloud data, which includes the first segmented point cloud data and the second segmented point cloud data, for each scan, and an internal sensor that detects a posture to acquire posture detection data (the lidar system 100 may be configured to repeatedly capture or generate point clouds of a field of regard (paragraph 0064), in accordance with a scan pattern corresponding to one channel (see Fig. 5) or several channels (see Fig. 6); the pixels obtained by the lidar system 608 may be referred to below as “lidar pixels” for clarity, and the camera 606 may obtain photographic images at a certain rate concurrently with the lidar system 608 obtaining lidar pixels (paragraph 0125)),
wherein the combined segmented point cloud data is generated by partially selecting data from each of the first segmented point cloud data and the second segmented point cloud data based on a feature of a structure shown in at least one camera image among the plurality of camera images, and the combined point cloud data is generated by combining a plurality of pieces of the generated combined segmented point cloud data (the image processing software 620 may generate a combined, time-aligned data set; for example, the image processing software 620 may generate and store in the memory 602 an array corresponding to the higher one of the two resolutions (of the camera 606 and the lidar system 608), with each element storing both a lidar pixel and a camera pixel (paragraphs 0131-0132 and Fig. 14)).
LaChapelle et al fails to teach the information processing method comprising:
generating combined point cloud data by executing combination processing using a plurality of pieces of the segmented point cloud data acquired in a period during which the posture detection data satisfies an allowable condition, among a plurality of pieces of the segmented point cloud data acquired at different acquisition times by the external sensor,
Katata et al teaches the information processing method comprising:
generating combined point cloud data by executing combination processing using a plurality of pieces of the segmented point cloud data acquired in a period during which the posture detection data satisfies an allowable condition (the sensor unit 27 includes a plurality of external sensors for recognizing the peripheral environment of the vehicle, such as a lidar and a camera, and also includes a plurality of internal field sensors such as a GPS receiver, a gyro sensor, a position sensor, and a triaxial sensor; the lidar discretely measures the distance to an object existing in the outside world, recognizes the surface of the object as a three-dimensional point cloud, and generates point cloud data; the camera generates image data taken from the vehicle; the position sensor is provided to detect the position of each external sensor, and the three-axis sensor is provided to detect the posture of each external sensor (paragraph 0040). Note: the distance to an object would read on the allowable condition, since if the object is not close enough, the sensor will not detect it), among a plurality of pieces of the segmented point cloud data acquired at different acquisition times by the external sensor (the upload data generating unit generates upload data based on the own vehicle position information supplied from the position estimating unit, the object data supplied from the object detecting unit, and the output data of the sensor unit 27 supplied from the sensor data cache (paragraph 0044). Note: the uploaded data is a combination of different data that would have been collected at different times),
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified LaChapelle et al to include: the information processing method comprising:
generating combined point cloud data by executing combination processing using a plurality of pieces of the segmented point cloud data acquired in a period during which the posture detection data satisfies an allowable condition, among a plurality of pieces of the segmented point cloud data acquired at different acquisition times by the external sensor,
The reason for doing so would have been to accurately sense the environment around the vehicle in order to reliably detect objects or obstructions.
LaChapelle et al in view of Katata et al fails to teach acquiring posture detection data related to posture detected via an internal sensor of the measurement apparatus, the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition.
Siminoff teaches acquiring posture detection data related to posture detected via an internal sensor of the measurement apparatus, the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition (monitoring and detecting (block B670) motion in at least one motion zone of the A/V recording and communication device (measurement apparatus), such as (but not limited to) the A/V recording and communication doorbell 130; motion may be detected using the PIR sensors 144 (internal sensor). Upon detecting motion, the process may include determining whether at least one intrusion zone conditional setting is satisfied. If a conditional setting of the at least one intrusion zone comprises a body posture of a person within the motion zone, then the process may compare a detected body posture of a person within the motion zone to one or more preset body postures to determine whether the detected body posture matches the one or more preset body postures (the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition) (paragraphs 0131-0132)).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified LaChapelle et al in view of Katata et al to include: acquiring posture detection data related to posture detected via an internal sensor of the measurement apparatus, the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition, as taught by Siminoff.
The reason for doing so would have been to accurately sense the environment around the vehicle in order to reliably detect objects or obstructions.
Regarding claim 8, LaChapelle et al teaches a non-transitory computer-readable storage medium storing a program that causes a computer to execute processing (one or more computer programs (e.g., one or more modules of computer-program instructions encoded or stored on a computer-readable non-transitory storage medium) (paragraph 0137)) on segmented point cloud data output from a measurement apparatus including an external sensor that includes a first sensor that acquires first segmented point cloud data by scanning laser light (the lidar system 100 can use the scan path to generate a point cloud (paragraph 0057)) and a second sensor that acquires second segmented point cloud data based on a plurality of camera images (a camera 606 cooperates with a lidar system 608 to obtain information about the environment; the system 600 may operate in a vehicle and collect information (point cloud) about the area in front of the vehicle (paragraph 0123). Note: the camera 606 can capture a large number of pixels at the same time (paragraph 0123); each of these depth-mapped points may be referred to as a pixel, and a collection of pixels captured in succession (which may be referred to as a depth map, a point cloud, or a frame) may be rendered as an image (paragraph 0063)), and repeatedly scans a surrounding space to acquire the segmented point cloud data, which includes the first segmented point cloud data and the second segmented point cloud data, for each scan (the lidar system 100 may be configured to repeatedly capture or generate point clouds of a field of regard (paragraph 0064), in accordance with a scan pattern corresponding to one channel (see Fig. 5) or several channels (see Fig. 6); the pixels obtained by the lidar system 608 may be referred to below as “lidar pixels” for clarity, and the camera 606 may obtain photographic images at a certain rate concurrently with the lidar system 608 obtaining lidar pixels (paragraph 0125)), and
wherein the combined segmented point cloud data is generated by partially selecting data from each of the first segmented point cloud data and the second segmented point cloud data based on a feature of a structure shown in at least one camera image among the plurality of camera images, and the combined point cloud data is generated by combining a plurality of pieces of the generated combined segmented point cloud data (the image processing software 620 may generate a combined, time-aligned data set; for example, the image processing software 620 may generate and store in the memory 602 an array corresponding to the higher one of the two resolutions (of the camera 606 and the lidar system 608), with each element storing both a lidar pixel and a camera pixel (paragraphs 0131-0132 and Fig. 14)).
LaChapelle et al fails to teach an internal sensor that detects a posture to acquire posture detection data, the program causing the computer to execute:
combination processing of generating combined point cloud data using a plurality of pieces of the segmented point cloud data acquired in a period during which the posture detection data satisfies an allowable condition, among a plurality of pieces of the segmented point cloud data acquired at different acquisition times by the external sensor,
Katata et al teaches an internal sensor that detects a posture to acquire posture detection data (the position sensor is provided to detect the position of each external sensor, and the three-axis sensor is provided to detect the posture of each external sensor (paragraph 0040)), the program causing the computer to execute:
combination processing of generating combined point cloud data using a plurality of pieces of the segmented point cloud data acquired in a period during which the posture detection data satisfies an allowable condition (the sensor unit 27 includes a plurality of external sensors for recognizing the peripheral environment of the vehicle, such as a lidar and a camera, and also includes a plurality of internal field sensors such as a GPS receiver, a gyro sensor, a position sensor, and a triaxial sensor; the lidar discretely measures the distance to an object existing in the outside world, recognizes the surface of the object as a three-dimensional point cloud, and generates point cloud data; the camera generates image data taken from the vehicle; the position sensor is provided to detect the position of each external sensor, and the three-axis sensor is provided to detect the posture of each external sensor (paragraph 0040). Note: the distance to an object would read on the allowable condition, since if the object is not close enough, the sensor will not detect it), among a plurality of pieces of the segmented point cloud data acquired at different acquisition times by the external sensor (the upload data generating unit generates upload data based on the own vehicle position information supplied from the position estimating unit, the object data supplied from the object detecting unit, and the output data of the sensor unit 27 supplied from the sensor data cache (paragraph 0044). Note: the uploaded data is a combination of different data that would have been collected at different times),
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified LaChapelle et al to include: an internal sensor that detects a posture to acquire posture detection data, the program causing the computer to execute: combination processing of generating combined point cloud data using a plurality of pieces of the segmented point cloud data acquired in a period during which the posture detection data satisfies an allowable condition, among a plurality of pieces of the segmented point cloud data acquired at different acquisition times by the external sensor, as taught by Katata et al.
The reason for doing so would have been to accurately sense the environment around the vehicle in order to reliably detect objects or obstructions.
LaChapelle et al in view of Katata et al fails to teach acquiring posture detection data related to posture detected via an internal sensor of the measurement apparatus, the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition.
Siminoff teaches acquiring posture detection data related to posture detected via an internal sensor of the measurement apparatus, the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition (monitoring and detecting (block B670) motion in at least one motion zone of the A/V recording and communication device (measurement apparatus), such as (but not limited to) the A/V recording and communication doorbell 130; motion may be detected using the PIR sensors 144 (internal sensor). Upon detecting motion, the process may include determining whether at least one intrusion zone conditional setting is satisfied. If a conditional setting of the at least one intrusion zone comprises a body posture of a person within the motion zone, then the process may compare a detected body posture of a person within the motion zone to one or more preset body postures to determine whether the detected body posture matches the one or more preset body postures (the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition) (paragraphs 0131-0132)).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified LaChapelle et al in view of Katata et al to include: acquiring posture detection data related to posture detected via an internal sensor of the measurement apparatus, the posture detection data indicating whether the posture is in a state that satisfies an allowable condition or in another state that does not satisfy the allowable condition, as taught by Siminoff.
The reason for doing so would have been to accurately sense the environment around the vehicle in order to reliably detect objects or obstructions.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over LaChapelle et al US 20180299534 in view of Katata et al US 20210312564, further in view of Siminoff US 20180114421, and further in view of Matsushita US 20150354967.
Regarding claim 2, LaChapelle et al in view of Katata et al further in view of Siminoff teaches all of the limitations of claim 1.
LaChapelle et al in view of Katata et al further in view of Siminoff fails to teach wherein the internal sensor is an inertial measurement sensor having at least one of an acceleration sensor or an angular velocity sensor, and the posture detection data includes an output value of the acceleration sensor or of the angular velocity sensor.
Matsushita teaches wherein the internal sensor is an inertial measurement sensor having at least one of an acceleration sensor or an angular velocity sensor (inertial device 1 has inertial sensors (e.g., an acceleration sensor, an angular velocity sensor, a magnetic field sensor, etc.) (paragraph 0037)), and the posture detection data includes an output value of the acceleration sensor or of the angular velocity sensor (the posture calculation unit 104 may calculate the current posture of the inertial device 1 using the sensor data obtained by the acceleration acquisition unit 101 and the angular velocity acquisition unit 102 (paragraph 0058)).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified LaChapelle et al in view of Katata et al further in view of Siminoff to include: wherein the internal sensor is an inertial measurement sensor having at least one of an acceleration sensor or an angular velocity sensor, and the posture detection data includes an output value of the acceleration sensor or of the angular velocity sensor, as taught by Matsushita.
The reason for doing so would have been to accurately sense the environment around the vehicle in order to reliably detect objects or obstructions.
Regarding claim 3, LaChapelle et al in view of Katata et al further in view of Siminoff fails to teach wherein the allowable condition is that an absolute value of the output value of the acceleration sensor or of the angular velocity sensor is less than a first threshold value.
Matsushita teaches wherein the allowable condition is that an absolute value of the output value of the acceleration sensor or of the angular velocity sensor is less than a first threshold value (obtain sensor data including acceleration on three axes (3-axis acceleration), angular velocity on three axes (3-axis angular velocity), and strength of the magnetic field on three axes (3-axis magnetic field strength) as needed; a coordinate system depends on a device or sensor type, which is called the “device coordinate system,” and thus measured values obtained in the device coordinate system are converted into an absolute coordinate system (absolute value of output value) (paragraph 0038)). Note: because the coordinate system depends on the sensor type, it would be obvious that if the value of a sensor is lower than a threshold, then the coordinate system would obtain the measured value of the sensor.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified LaChapelle et al in view of Katata et al further in view of Siminoff to include: wherein the allowable condition is that an absolute value of the output value of the acceleration sensor or of the angular velocity sensor is less than a first threshold value, as taught by Matsushita.
The reason for doing so would have been to accurately sense the environment around the vehicle in order to reliably detect objects or obstructions.
Allowable Subject Matter
Claims 4 and 5 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication should be directed to Michael Burleson, whose telephone number is (571) 272-7460 and fax number is (571) 273-7460. The examiner can normally be reached Monday through Friday from 8:00 a.m. to 4:30 p.m. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong, can be reached at (571) 270-3438.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Michael Burleson
Patent Examiner
Art Unit 2681
March 30, 2026
/MICHAEL BURLESON/
/AKWASI M SARPONG/SPE, Art Unit 2681 4/3/2026