DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/26/2026 has been entered.
Response to Amendment
3. The amendment(s), filed on 01/26/2026, have been entered and made of record. Claims 1-21 are pending.
Response to Arguments
4. Applicant's arguments filed on 01/26/2026 with respect to claims 1-21 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 112
5. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
6. Claims 19 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
7. Claim 19 recites the limitation "the mobile computing device" in line 13. There is insufficient antecedent basis for this limitation in the claim.
8. Claim 20 recites the limitation "the mobile computing device" in line 13. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 102
9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
10. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
11. Claims 1-2, 8-16 and 18-21 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kang et al. (US-PGPUB 2020/0077023).
Regarding claim 1, Kang discloses a computer-implemented method (Method 1200; see fig. 12 and paragraph 0155), comprising:
receiving, by a mobile computing device (The computing device can include a mobile device; see paragraph 0155), one or more image parameters associated with a video frame of a plurality of video frames (Identify a rate and degree of movement of the image capture device. Identify motions in the frame resulting from the movement of the image capture device, such as shaking; see paragraph 0004);
receiving, from a motion sensor of the mobile computing device, motion data associated with the video frame (The image stabilization process can obtain motion measurements from a sensor, such as a gyroscope; see paragraph 0004. See step 1204, fig. 12 and paragraph 0141); and
predicting, by applying a neural network to the one or more image parameters and the motion data, a stabilized version of the video frame, wherein the stabilized version removes image degradations caused by a camera shake of the mobile computing device (Predict any movements of the image capture device before, while, and/or after the frame was captured. Identify motions in the frame and stabilize the frame. The image stabilization process uses machine learning to identify the motions in the frame and stabilize the frame to eliminate the motions. The image stabilization process implements a deep learning network to learn and identify the motions and stabilize the frames based on the motions identified. A deep learning neural network is trained with samples of motion sensor data to learn and recognize specific motion patterns and optimize the image stabilization performance based on a relevant motion pattern identified for the frame(s) being stabilized; see paragraphs 0004, 0006. Predict future motion patterns; see paragraphs 0064, 0143. See steps 1206 and 1208, fig. 12 and paragraphs 0144, 0145).
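For illustration only, and not drawn from Kang or from the claims, the prediction step mapped above could be sketched roughly as follows in Python/PyTorch; the model interface, tensor shapes, and all names are hypothetical assumptions:

import torch

def predict_stabilized_frame(model, frame, image_params, motion_data):
    # Illustrative sketch: frame is a (1, 3, H, W) tensor, image_params a (1, P)
    # vector of per-frame parameters, motion_data a (1, T, 3) run of gyroscope samples.
    # The trained network is assumed to accept these inputs and return a stabilized frame.
    model.eval()
    with torch.no_grad():
        stabilized = model(frame, image_params, motion_data)
    return stabilized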
Regarding claim 2, Kang discloses everything claimed as applied above (see claim 1). In addition, Kang discloses the neural network comprises an encoder and a decoder (The techniques are performed by a video encoder/decoder; see paragraph 0165), and wherein applying the neural network comprises: applying the encoder (The neural network can include an autoencoder; see paragraphs 0159, 0106) to the one or more image parameters to generate a latent space representation (The deep learning network includes multiple layers; see figs. 6-7 and paragraph 0094); adjusting the latent space representation based on the motion data (ML EIS engine 312 adjusts the one or more frames according to the parameters to generate one or more adjusted frames having a reduction in at least some of the motions in the one or more frames; see fig. 12 and paragraph 0145); and applying the decoder to the latent space representation as adjusted to output the stabilized version (The destination device, such as the rendering engine, receives the encoded video data to be decoded via the computer-readable medium; see paragraphs 0161, 0040).
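For illustration only, an encoder/decoder arrangement of the kind mapped above, with the latent representation adjusted by the motion data before decoding, could be sketched as follows (Python/PyTorch; the architecture, layer sizes, and output meaning are hypothetical assumptions, not Kang's implementation):

import torch
import torch.nn as nn

class StabilizerNet(nn.Module):
    def __init__(self, param_dim=8, motion_dim=3, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(param_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.motion_proj = nn.Linear(motion_dim, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 4))  # e.g. a per-frame warp/pose output

    def forward(self, image_params, motion_data):
        latent = self.encoder(image_params)               # encode the image parameters
        latent = latent + self.motion_proj(motion_data)   # adjust the latent with motion data
        return self.decoder(latent)                       # decode to the stabilization output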
Regarding claim 8, Kang discloses everything claimed as applied above (see claim 2). In addition, Kang discloses applying the encoder further comprises: generating, from a pair of successive video frames of the plurality of video frames, an optical flow indicative of a correspondence between the pair of successive video frames; and generating the latent space representation based on the optical flow (Obtaining a sequence of frames captured by an image capture device during a period of time and collecting motion sensor measurements calculated by a motion sensor associated with the video capture device based on movement of the image capture device during the period of time. Using a deep learning network and the motion sensor measurements, parameters for counteracting the motions in the one or more frames, the motions resulting from the movement of the image capture device during the period of time; see paragraph 0009 and figs. 12, 7, 11C).
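For illustration only, a dense optical flow between a pair of successive frames, of the kind usable as an encoder input, could be computed with OpenCV as follows (the surrounding variable names are hypothetical):

import cv2

def pairwise_optical_flow(prev_frame, next_frame):
    # Dense per-pixel flow indicating the correspondence between two successive frames.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow  # (H, W, 2) displacement field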
Regarding claim 9, Kang discloses everything claimed as applied above (see claim 1). In addition, Kang discloses training the neural network to receive a particular video frame and output, based on one or more image parameters and motion data associated with the particular video frame, a stabilized version of the particular video frame (The deep learning network 314 can be trained to output values that suppress or smooth out the ripples and thus counteract the jitter, shaking, and other undesirable motions. To stabilize the frame 310 and generate the stabilized frame 358A and/or 358B, the ML EIS engine 312 can apply the motion values or parameters calculated by the deep learning network 314 to reduce or counteract the identified motions in the frame 310; see paragraphs 0063, 0065, 0082, 0144-0145 and fig. 12).
Regarding claim 10, Kang discloses everything claimed as applied above (see claim 9). In addition, Kang discloses the training of the neural network further comprises adjusting, for the particular video frame, a difference between virtual camera poses for successive video frames (The ML EIS engine 312 adjusts the one or more frames according to the parameters to generate one or more adjusted frames having a reduction in at least some of the motions in the one or more frames; see paragraph 0145 and fig. 12).
Regarding claim 11, Kang discloses everything claimed as applied above (see claim 9). In addition, Kang discloses the training of the neural network further comprises adjusting, for the particular video frame, a first order difference between virtual camera poses for successive video frames (The differences in patterns of motion can vary across a wide array of circumstances. The differences in patterns of motion can affect image stabilization quality or performance. Accordingly, the disclosed techniques can implement machine learning to learn and categorize different patterns of motion and optimize the image stabilization operations and results based on the relevant set of circumstances of each case; see paragraph 0121).
Regarding claim 12, Kang discloses everything claimed as applied above (see claim 9). In addition, Kang discloses the training of the neural network further comprises adjusting, for the particular video frame, an angular difference between a real camera pose and a virtual camera pose (The patterns 404 include the angle and velocity of gyroscope measurements in the plotted vectors. The angles represent the gyroscope measurements and the velocity represents angle changes between consecutive gyroscope measurements. The EIS outputs represented by the plotted EIS pitch vector 406B, EIS roll vector 408B, and EIS yaw vector 410B appear as smooth lines with more gradual changes in angle (e.g., lower velocity), which indicates that jitter and shaking have been removed or reduced in the EIS outputs. Predicting future changes in movement, adjusting current and future angles to maintain a stabilized output; see paragraphs 0004, 0064, 0071, 0084-0090).
Regarding claim 13, Kang discloses everything claimed as applied above (see claim 12). In addition, Kang discloses the adjusting of the angular difference further comprises: upon a determination that the angular difference exceeds a threshold angle, reducing the angular difference between the real camera pose and the virtual camera pose (The optimal solution for machine learning can be determined based on the EIS outputs that best minimize or filter out the ripples (e.g., 412A-C) from the gyroscope samples 402 without exceeding a threshold distance from the plotted input pitch vector 406A, input roll vector 408A, and input yaw vector 410A. In other words, in the optimal solution, the plotted EIS outputs should minimize or filter out the ripples (e.g., 412A-C) in the input gyroscope samples 402 plotted (e.g., the input pitch vector 406A, input roll vector 408A, and input yaw vector 410A), with minimal deviation (e.g., below a threshold) in the angles or general trajectory between the plotted EIS outputs and the input gyroscope samples 402 plotted; see paragraph 0086).
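For illustration only, a thresholded reduction of the angular difference between a real and a virtual camera pose, in the spirit of the limitation mapped above, might be sketched as follows (Python; the angle representation and threshold value are assumptions):

import numpy as np

def limit_pose_deviation(real_angle_deg, virtual_angle_deg, max_deg=5.0):
    # If the virtual pose deviates from the real pose by more than the threshold,
    # pull it back so the deviation equals the threshold.
    diff = virtual_angle_deg - real_angle_deg
    if abs(diff) > max_deg:
        virtual_angle_deg = real_angle_deg + np.sign(diff) * max_deg
    return virtual_angle_deg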
Regarding claim 14, Kang discloses everything claimed as applied above (see claim 9). In addition, Kang discloses the training of the neural network further comprises adjusting, for the particular video frame, an area of a distorted region indicative of an undesired motion of the mobile computing device (Therefore, the deep learning network 314 can be trained to output values that suppress or smooth out the ripples and thus counteract the jitter, shaking, and other undesirable motions; see paragraph 0063).
Regarding claim 15, Kang discloses everything claimed as applied above (see claim 14). In addition, Kang discloses the adjusting of the area of the distorted region comprises: determining areas of distorted regions in one or more video frames that appear after the particular video frame; and applying weights to the areas of the distorted regions, wherein the weights as applied are configured to decrease with distance of a video frame, of the one or more video frames, from the particular video frame (By adjusting/tuning the weights based on the velocity feedback 720 and the angle feedback 722, the deep learning network 314 can reduce or suppress ripples in its velocity output (e.g., 710) and reduce or suppress delay in its angle output (e.g., 712). The process can repeat for a certain number of iterations for each set of training data (e.g., the gyroscope samples 702) until the weights in the deep learning network 314 are accurately tuned. Weights can be updated so they change in the opposite direction of the gradient; see paragraphs 0012, 0098, 0111, 0101-0105).
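For illustration only, weights that decrease with a look-ahead frame's distance from the current frame could be applied to distorted-region areas as follows (Python; the exponential decay is an assumption, not a teaching of Kang):

def weighted_distortion(areas, decay=0.8):
    # areas[k] is the distorted-region area of the k-th frame after the current frame;
    # later frames receive smaller weights.
    return sum(area * (decay ** k) for k, area in enumerate(areas))

# Example: weighted_distortion([120.0, 90.0, 40.0]) = 120 + 72 + 25.6 = 217.6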
Regarding claim 16, Kang discloses everything claimed as applied above (see claim 9). In addition, Kang discloses the training of the neural network further comprises adjusting, for the particular video frame, an image loss (The deep learning network 314 can perform a backward pass by determining which inputs (weights) most contributed to the loss of the deep learning network 314, and can adjust the weights so the loss decreases and is eventually minimized; see paragraphs 0101-0105).
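For illustration only, a single training step that computes an image loss and adjusts the network weights by backpropagation could be sketched as follows (Python/PyTorch; the L1 image loss and the optimizer are assumptions):

import torch

def training_step(model, optimizer, frame, image_params, motion_data, target_frame):
    # Forward pass, image loss against a ground-truth stable frame, then a backward
    # pass that updates the weights in the direction that decreases the loss.
    optimizer.zero_grad()
    prediction = model(frame, image_params, motion_data)
    loss = torch.nn.functional.l1_loss(prediction, target_frame)
    loss.backward()
    optimizer.step()
    return loss.item()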
Regarding claim 18, Kang discloses everything claimed as applied above (see claim 1). In addition, Kang discloses predicting the stabilized version of the video frame comprises: obtaining the trained neural network at the mobile computing device; and applying the trained neural network as obtained to the predicting of the stabilized version (The ML EIS engine 312 uses the deep learning network 314 and motion measurements to generate parameters for counteracting motions in the one or more frames. The ML EIS engine 312 adjusts the one or more frames according to the parameters to generate one or more adjusted frames having a reduction in at least some of the motions in the one or more frames; see steps 1206, 1208, paragraph 0144-0145 and figs. 12, 3B).
Regarding claim 19, Kang discloses a computing device (Computing device 1300; see fig. 13 and paragraph 0155), comprising:
one or more processors (Processor 1310; see fig. 13 and paragraph 0174); and data storage (Memories/storage 1315, 1320, 1325, 1330; see fig. 13 and paragraph 0174), wherein the data storage has stored thereon computer-executable instructions (Program code; see paragraphs 0172, 0173, 0158, 0037-0038) that, when executed by the one or more processors, cause the computing device to carry out functions comprising:
receiving, by the computing device, one or more image parameters associated with a video frame of a plurality of video frames (Identify a rate and degree of movement of the image capture device. Identify motions in the frame resulting from the movement of the image capture device, such as shaking; see paragraph 0004);
receiving, from a motion sensor of the computing device, motion data associated with the video frame (The image stabilization process can obtain motion measurements from a sensor, such as a gyroscope; see paragraph 0004. See step 1204, fig. 12 and paragraph 0141); and
predicting, by applying a neural network to the one or more image parameters and the motion data, a stabilized version of the video frame, wherein the stabilized version removes image degradations caused by a camera shake of the mobile computing device (Predict any movements of the image capture device before, while, and/or after the frame was captured. Identify motions in the frame and stabilize the frame. The image stabilization process uses machine learning to identify the motions in the frame and stabilize the frame to eliminate the motions. The image stabilization process implements a deep learning network to learn and identify the motions and stabilize the frames based on the motions identified. A deep learning neural network is trained with samples of motion sensor data to learn and recognize specific motion patterns and optimize the image stabilization performance based on a relevant motion pattern identified for the frame(s) being stabilized; see paragraphs 0004, 0006. Predict future motion patterns; see paragraphs 0064, 0143. See steps 1206 and 1208, fig. 12 and paragraphs 0144, 0145).
Regarding claim 20, Kang discloses an article of manufacture comprising one or more non-transitory computer readable media having computer-readable instructions stored thereon (Computer-readable medium having stored thereon code and a processor to execute the code; see paragraphs 0172, 0173, 0158, 0037-0038) that, when executed by one or more processors of a computing device (Computing device 1300; see fig. 13 and paragraph 0155), cause the computing device to carry out functions comprising:
receiving, by the computing device, one or more image parameters associated with a video frame of a plurality of video frames (Identify a rate and degree of movement of the image capture device. Identify motions in the frame resulting from the movement of the image capture device, such as shaking; see paragraph 0004);
receiving, from a motion sensor of the computing device, motion data associated with the video frame (The image stabilization process can obtain motion measurements from a sensor, such as a gyroscope; see paragraph 0004. See step 1204, fig. 12 and paragraph 0141); and
predicting, by applying a neural network to the one or more image parameters and the motion data, a stabilized version of the video frame, wherein the stabilized version removes image degradations caused by a camera shake of the mobile computing device (Predict any movements of the image capture device before, while, and/or after the frame was captured. Identify motions in the frame and stabilize the frame. The image stabilization process uses machine learning to identify the motions in the frame and stabilize the frame to eliminate the motions. The image stabilization process implements a deep learning network to learn and identify the motions and stabilize the frames based on the motions identified. A deep learning neural network is trained with samples of motion sensor data to learn and recognize specific motion patterns and optimize the image stabilization performance based on a relevant motion pattern identified for the frame(s) being stabilized; see paragraphs 0004, 0006. Predict future motion patterns; see paragraphs 0064, 0143. See steps 1206 and 1208, fig. 12 and paragraphs 0144, 0145).
Regarding claim 21, Kang discloses everything claimed as applied above (see claim 1). In addition, Kang discloses the image degradations caused by a camera shake comprise one or more of a motion blur, a rolling shutter, image degradations caused by an unintentional shaking of a user's hand, or image degradations caused by a panning video (The motion measurements can be used to identify motions in the frame resulting from the movement of the image capture device, such as shaking or vibrations; see paragraph 0004. When plotted, jitter, shaking and other undesirable motions may appear as ripples, while other movements such as panning or linear movements (e.g., forward or backward acceleration) may appear as smooth or more gradual changes in angle. The desired output is to suppress or smooth out the ripples in the plotted motion sensor measurements. Therefore, the deep learning network 314 can be trained to output values that suppress or smooth out the ripples and thus counteract the jitter, shaking, and other undesirable motions; see paragraphs 0063, 0085, 0144).
Claim Rejections - 35 USC § 103
12. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
13. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
14. Claims 3 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Kang in view of Wang et al. (US-PGPUB 2020/0364554).
Regarding claim 3, Kang discloses everything claimed as applied above (see claim 2). However, Kang does not expressly disclose generating, from the motion data, a real camera pose associated with the video frame, and wherein the latent space representation is based on the real camera pose.
On the other hand, Wang discloses generating, from the motion data, a real camera pose associated with the video frame, and wherein the latent space representation is based on the real camera pose (Generate coarse camera pose using semantic map data, image data parameter and motion sensor data. Use the coarse camera pose and a camera intrinsic parameter to create first semantic label map and provide both the image data and the first semantic label map to pose CNN 112 to obtain a corrected camera pose; see figs. 6, 1 and paragraphs 0089, 0032-0033).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang and Wang to provide generating, from the motion data, a real camera pose associated with the video frame, and wherein the latent space representation is based on the real camera pose for the purpose of improving the precision and quality of the stabilized video frame.
Regarding claim 6, Kang discloses everything claimed as applied above (see claim 2). However, Kang does not disclose determining a first history of real camera poses and a second history of virtual camera poses, and wherein the latent space representation is based on the first history of the real camera poses and the second history of the virtual camera poses.
Nevertheless, Wang discloses determining a first history of real camera poses (Generating coarse camera poses; see fig. 6 and paragraph 0089) and a second history of virtual camera poses (Obtaining corrected camera poses using CNN; see fig. 6), and wherein the latent space representation is based on the first history of the real camera poses and the second history of the virtual camera poses (Using the corrected camera poses in a Pose RNN to generate refined camera poses; see fig. 6 and paragraph 0089).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang and Wang to provide determining a first history of real camera poses and a second history of virtual camera poses, and wherein the latent space representation is based on the first history of the real camera poses and the second history of the virtual camera poses for the purpose of improving the precision and quality of the stabilized video frame.
Regarding claim 7, Kang discloses everything claimed as applied above (see claim 2). In addition, Kang discloses the motion data comprises rotation data and the predicting of the stabilized version is based on a relative rotation (The motion measurements can indicate an angle of rotation and velocity of change in angles. The motion sensor 106 can measure the extent and rate of rotation (e.g., roll, pitch, and yaw) of the image capture devices. Measurements describing rotational movement are used as training input for the stabilized frame; see paragraphs 0004, 0043, 0107).
However, Kang does not expressly disclose timestamp data.
On the other hand, Wang discloses the motion data comprises rotation data and timestamp data, and the method further comprising: determining, from the rotation data and the timestamp data, a relative rotation of a camera pose in the video frame relative to a reference camera pose in a reference video frame, and wherein the predicting of the stabilized version is based on the relative rotation (Pose network 112 calculates relative rotation and translation to yield corrected camera pose 113. To build temporal correlations, the corrected poses 113 are fed into an RNN to improve the pose accuracy in the stream and provide higher order temporal information. Given the rectified or corrected camera pose 113, refined rendered label map 140 can be generated. Refined camera pose 118 is generated from map 140 and segment CNN 166; see figs. 6, 1 and paragraphs 0089-0090, 0032-0033).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang and Wang to provide the motion data comprises rotation data and timestamp data, and the method further comprising: determining, from the rotation data and the timestamp data, a relative rotation of a camera pose in the video frame relative to a reference camera pose in a reference video frame, and wherein the predicting of the stabilized version is based on the relative rotation for the purpose of improving the precision and quality of the stabilized video frame.
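For illustration only, a relative rotation of the current frame's camera pose with respect to a reference frame's pose, derived from timestamped rotation samples, could be computed as follows (Python/SciPy; pairing samples to frames by timestamp is assumed to have been done upstream):

from scipy.spatial.transform import Rotation as R

def relative_rotation(ref_quat, cur_quat):
    # Quaternions (x, y, z, w) sampled at the reference and current frame timestamps.
    r_ref = R.from_quat(ref_quat)
    r_cur = R.from_quat(cur_quat)
    return r_ref.inv() * r_cur   # rotation taking the reference pose to the current pose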
15. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Kang in view of Zhang et al. (US-PGPUB 2022/0292867).
Regarding claim 4, Kang discloses everything claimed as applied above (see claim 2). However, Kang fails to disclose the decoder comprises a long short-term memory (LSTM) component, and wherein applying the decoder further comprises applying the LSTM component to predict a virtual camera pose.
On the other hand, Zhang discloses the decoder comprises a long short-term memory (LSTM) component, and wherein applying the decoder further comprises applying the LSTM component to predict movement (The decoder model 109 is generally configured to generate output to predict the movement of a given person depicted in an image at time interval t. In one embodiment, the decoder model 109 leverages hierarchical LSTMs 111 to progressively decode the feature vectors and predict the offset (e.g., an output vector) of the location of each person; see paragraphs 0037, 0039, 0046).
Since Kang and Zhang are both directed to predicting specific movements and positions in a video, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang and Zhang to provide the decoder comprises a long short-term memory (LSTM) component, and wherein applying the decoder further comprises applying the LSTM component to predict a virtual camera pose for the purpose of improving the precision of the stabilized video frame.
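For illustration only, an LSTM-based decoder that predicts a pose value per time step, in the spirit of the combination articulated above, could be sketched as follows (Python/PyTorch; layer sizes and the quaternion output are hypothetical assumptions):

import torch
import torch.nn as nn

class LSTMPoseDecoder(nn.Module):
    def __init__(self, latent_dim=64, hidden_dim=128, pose_dim=4):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, pose_dim)   # e.g. one quaternion per step

    def forward(self, latent_seq):                    # (batch, T, latent_dim)
        hidden, _ = self.lstm(latent_seq)
        return self.head(hidden)                      # (batch, T, pose_dim) predicted poses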
16. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kang in view of Zhang and further in view of Wang.
Regarding claim 5, Kang and Zhang disclose everything claimed as applied above (see claim 4). However, Kang and Zhang do not disclose the decoder comprises a warping grid, and wherein applying the decoder further comprises applying the warping grid to the predicted virtual camera pose to output the stabilized version.
Nevertheless, Wang discloses the decoder comprises a warping grid, and wherein applying the decoder further comprises applying the warping grid to the predicted virtual camera pose to output the stabilized version (rendering a 2D road map image with a rasterization grid; see paragraph 0059).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang, Zhang and Wang to provide the decoder comprises a warping grid, and wherein applying the decoder further comprises applying the warping grid to the predicted virtual camera pose to output the stabilized version for the purpose of improving the precision of the stabilized video frame.
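For illustration only, applying a warping grid derived from a predicted virtual camera pose to produce the output frame could be sketched as follows (Python/PyTorch; building the grid from the pose is assumed to occur elsewhere):

import torch
import torch.nn.functional as F

def warp_frame(frame, grid):
    # frame: (1, 3, H, W); grid: (1, H, W, 2) normalized sampling coordinates in [-1, 1],
    # e.g. computed from the predicted virtual camera pose.
    return F.grid_sample(frame, grid, mode='bilinear',
                         padding_mode='border', align_corners=False)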
17. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Kang in view of Herman et al. (US Patent 11,166,003).
Regarding claim 17, Kang discloses everything claimed as applied above (see claim 1). However, Kang fails to disclose the one or more image parameters comprise optical image stabilization (OIS) data indicative of a lens position, and wherein the applying of the neural network comprises predicting a lens offset for a virtual camera based on the lens position.
Nevertheless, Herman discloses the applying of the neural network comprises predicting a lens offset for a virtual camera based on the lens position (The finite element model's prediction can be incorporated into a trained neural network or other algorithm to improve and enable real time prediction of the state of the lens assembly. The image processor 310 and/or the computer 110 processor can use machine learning techniques to predict image distortion based on the predicted lens displacement; see col. 8, lines 6-37).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kang and Herman to provide the one or more image parameters comprise optical image stabilization (OIS) data indicative of a lens position, and wherein the applying of the neural network comprises predicting a lens offset for a virtual camera based on the lens position for the purpose of improving image quality by effectively predicting image distortion.
Contact Information
18. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CYNTHIA CALDERON whose telephone number is (571)270-3580. The examiner can normally be reached M-F 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TWYLER HASKINS, can be reached at (571)272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CYNTHIA CALDERON/Primary Examiner, Art Unit 2639 02/19/2026