Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 7/25/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 15 and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent 10,863,294 to Yang et al.
With regard to claim 15, Yang et al discloses a computer-implemented glitch detection method, comprising: for each audio data segment of a plurality of audio data segments, assessing whether the respective audio data segment is glitched (col. 3, line 27-col. 4, line 24), the assessing including: generating an image representing the audio data segment (Fig. 3), the image including a spectrogram of the audio data segment (col. 4, line 11); providing, as an input to at least one model (processing device 140), the image including the spectrogram of the audio data segment; and generating, by the at least one model, a classification output indicating whether the audio data segment represented by the image is glitched (col. 4, lines 20-24); generating one or more records identifying one or more audio data segments of the plurality of audio data segments, each of the one or more audio data segments classified as glitched by the at least one model (Fig. 2); and determining a validation status of a system-under-test (SUT) based on the one or more records (col. 1, lines 38-61).
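Purely as an illustrative sketch of the claimed spectrogram-and-classify steps (not drawn from Yang et al's disclosure; the `spectrogram` helper, the magnitude-threshold classifier, and all parameter values below are hypothetical stand-ins for the claimed model), the recited operations might be expressed as:

```python
import numpy as np

def spectrogram(segment, frame_len=256, hop=128):
    # Short-time Fourier transform magnitude: one column per frame.
    n_frames = 1 + (len(segment) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([segment[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, n_frames)

def classify_glitched(spec, threshold=30.0):
    # Placeholder "model": flags segments whose peak spectral magnitude
    # exceeds a fixed threshold. A stand-in for the claimed classifier.
    return bool(spec.max() > threshold)

# Example segments: a clean tone vs. a hard-clipped (glitched) tone.
t = np.arange(2048) / 16000.0
clean = 0.2 * np.sin(2 * np.pi * 440 * t)
glitched = np.clip(10.0 * np.sin(2 * np.pi * 440 * t), -1.0, 1.0)

# "Records" identifying which segments were classified as glitched.
records = [{"segment": name, "glitched": classify_glitched(spectrogram(seg))}
           for name, seg in [("clean", clean), ("glitched", glitched)]]
```

The threshold classifier exists only to complete the data flow from audio segment, to spectrogram image, to classification output, to record.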
With regard to claim 17, Yang et al discloses the glitch detection method of claim 15, further comprising, for each audio data segment of the plurality of audio data segments: extracting, from the audio data segment and/or from the image representing the audio data segment, one or more features (spectrogram); and providing, as one or more additional inputs to the at least one model, the one or more extracted features (col. 4, line 25).
With regard to claim 18, Yang et al discloses the glitch detection method of claim 17, wherein the one or more extracted features include a first feature indicating one or more frequency domain attributes of the image representing the audio data segment, a second feature indicating a plurality of pixel intensity gradients derived from the image representing the audio data segment, and/or a third feature characterizing anomalousness of a plurality of pixel intensity values derived from the image representing the audio data segment (considering the “and/or” language of the claim, the frequency domain attributes (spectrogram) of the image representing the audio data segment is taught by Yang et al, col. 3, line 65).
With regard to claim 19, Yang et al discloses the glitch detection method of claim 15, wherein: for each audio data segment of the one or more audio data segments classified as glitched by the at least one model, the classification output further indicates one or more probabilities of the audio data segment having a glitch of one or more audio glitch types (see col. 7, lines 4-18, "confidence level").
With regard to claim 20, Yang et al discloses the glitch detection method of claim 19, wherein the one or more audio glitch types include a buzzing glitch type, an intermittent glitch type, a noise-mixing glitch type, and/or a clipping glitch type ("buzzing and rub," col. 3, line 32).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Yang et al in view of US PGPub 2021/0366183 to Gisslen et al.
With regard to claim 16, Yang et al discloses an audio glitch detection system that employs a classification model analyzing spectrogram images of the audio, but fails to disclose that the at least one model comprises a convolutional neural network (CNN).
On the other hand, Gisslen et al, in the same field of endeavor (glitch detection), discloses the use of a convolutional neural network (CNN) as the model employed to detect glitches (see abstract of Gisslen et al).
It would have been obvious before the effective filing date of the claimed invention to have used the CNN disclosed by Gisslen et al as the model for glitch detection employed by Yang et al, with the rationale being that the CNN model would enable an improvement in glitch detection, as well as a reduction in false positives (see Gisslen et al, [0020]).
Claims 1-4, 6, 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over the NPL article “Automatic Artifact Detection in Video Games” to Davarmanesh et al, hereinafter referred to as “Davarmanesh et al” in view of US PGPub 2021/0366183 to Gisslen et al.
With regard to claim 1, Davarmanesh et al discloses a computer-implemented glitch detection method, comprising: for each image of a plurality of images, assessing whether the respective image is glitched (see Abstract), the assessing including: extracting, from the image, one or more features (features are the types of defects, see section 2.1); providing, as a plurality of inputs to at least one model, the image and the one or more features extracted from the image (see section 4 “Classification”); and generating, by the at least one model, a classification output indicating whether the image is glitched (see section 6 “Results”); generating one or more records identifying one or more images of the plurality of images, each of the one or more images classified as glitched by the at least one model (see section 6.1).
However, Davarmanesh et al fails to disclose determining a validation status of a system-under-test (SUT).
Gisslen et al, also in the glitch detection art, discloses the automation of glitch detection when testing a system under test, see [0051] and [0088].
It would have been obvious before the effective filing date of the claimed invention to have employed the glitch detection system taught by Davarmanesh et al in an SUT environment as taught by Gisslen et al, since doing this would provide for automation of the testing environment, thus removing dependency upon manual human interaction.
With regard to claim 2, the glitch detection method of claim 1, wherein the at least one model includes a neural network (see Davarmanesh et al, section 4.1 “CNN”), and wherein providing the image and the one or more features extracted from the image as the plurality of inputs to the at least one model includes providing the plurality of inputs to an input layer of the neural network (“resize to convolution to maxpool to convolution to maxpool to softmax”).
With regard to claim 3, the glitch detection method of claim 2, wherein the neural network is a convolutional neural network (CNN), wherein the CNN includes one or more convolutional layers including a first convolutional layer, and wherein the one or more features are inserted into the CNN at an input of the first convolutional layer (see claim 2 above).
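The pipeline quoted above (resize, convolution, maxpool, convolution, maxpool, softmax) can be sketched with untrained, randomly initialized weights; every kernel, dimension, and the two-class output here is a hypothetical illustration of the data flow, not taken from Davarmanesh et al:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    # Valid 2-D cross-correlation of a single-channel image with one kernel.
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool(x, size=2):
    # Non-overlapping max pooling; trailing rows/cols are dropped.
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# resize -> conv -> maxpool -> conv -> maxpool -> softmax, random weights.
img = rng.random((17, 17))
x = img[:16, :16]                                   # "resize" stand-in: crop
x = np.maximum(conv2d(x, rng.random((3, 3))), 0)    # conv + ReLU -> 14x14
x = maxpool(x)                                      # -> 7x7
x = np.maximum(conv2d(x, rng.random((3, 3))), 0)    # -> 5x5
x = maxpool(x)                                      # -> 2x2
probs = softmax(x.ravel() @ rng.random((4, 2)))     # 2 classes: glitched/clean
```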
With regard to claim 4, the glitch detection method of claim 1, further comprising obtaining first audiovisual data derived from second audiovisual data processed by a computer system, wherein the second audiovisual data include image data, and wherein the first audiovisual data include the plurality of images (see Davarmanesh et al, section 5.1 “Data”).
With regard to claim 6, the glitch detection method of claim 1, further comprising obtaining first audiovisual data derived from second audiovisual data processed by a computer system, wherein the second audiovisual data include audio data, and wherein the first audiovisual data include a set of images representing a respective set of segments of the audio data (see Davarmanesh et al, section 5.1).
With regard to claim 10, the glitch detection method of claim 1, wherein the one or more features include a first feature indicating one or more frequency domain attributes of the image, a second feature indicating a plurality of pixel intensity gradients derived from the image, and/or a third feature characterizing anomalousness of a plurality of pixel intensity values derived from the image (see Davarmanesh et al, section 3.1, 3.2 and 3.3).
With regard to claim 11, the glitch detection method of claim 10, wherein extracting the one or more features includes: extracting the first feature based on a Fourier transform applied to the image; extracting the second feature based on a histogram of orientations of the plurality of pixel intensity gradients; and/or extracting the third feature based on a plurality of anomaly scores of the respective plurality of pixel intensity values (see claim 10 above).
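As a hypothetical illustration of the three recited feature extractors (a Fourier-transform feature, a histogram of gradient orientations, and per-pixel anomaly scores), none of which is taken from the cited references, one might compute:

```python
import numpy as np

def frequency_feature(img):
    # First-style feature: summary of the image's 2-D Fourier magnitude
    # spectrum (hypothetical choice: mean log-magnitude).
    mag = np.abs(np.fft.fft2(img))
    return float(np.mean(np.log1p(mag)))

def gradient_orientation_histogram(img, bins=8):
    # Second-style feature: magnitude-weighted histogram of pixel-intensity
    # gradient orientations (a HOG-like descriptor, illustration only).
    gy, gx = np.gradient(img.astype(float))
    angles = np.arctan2(gy, gx)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy))
    return hist / (hist.sum() + 1e-12)

def anomaly_scores(img):
    # Third-style feature: z-scores characterizing how anomalous each pixel
    # intensity is relative to the image's own distribution.
    flat = img.astype(float).ravel()
    return np.abs(flat - flat.mean()) / (flat.std() + 1e-12)

img = np.zeros((32, 32))
img[10:12, :] = 255.0  # a horizontal "stripe" artifact

features = (frequency_feature(img),
            gradient_orientation_histogram(img),
            anomaly_scores(img))
```

The stripe's pixels score as strong outliers under the third feature, which is the intuition behind using anomalousness as a glitch cue.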
With regard to claim 12, the glitch detection method of claim 1, wherein: for each image of the one or more images classified as glitched by the at least one model, the classification output further indicates one or more probabilities of the image having a glitch of one or more image glitch types. While the primary reference to Davarmanesh et al is silent as to probabilities of glitches, Gisslen et al discloses at [0073] that confidence data can be output that provides a value as to how certain the system is that a glitch is actually present. Therefore, it would have been obvious before the effective filing date of the claimed invention to have provided the glitch system of Davarmanesh et al with the ability to provide a confidence score to the detected glitch as taught by Gisslen et al, with the rationale being that the confidence score will help reduce the number of false positives, see Gisslen et al at [0073].
With regard to claim 13, the glitch detection method of claim 12, wherein the one or more image glitch types include a striped merge glitch type, a discoloration glitch type, a dotted line glitch type, a line pixelation glitch type, a Morse Code glitch type, a parallel line glitch type, a radial dotted line glitch type, a random patch glitch type, a regular triangulation glitch type, a shader glitch type, a shape glitch type, a square patch glitch type, a stuttering glitch type, a texture pop in glitch type, and/or a triangle glitch type (see Davarmanesh et al, sections 2.1.1 through 2.1.10).
Claim 14 is rejected, mutatis mutandis, for the same reasoning as claim 1 above.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Davarmanesh et al in view of Gisslen et al as applied to claims 1-4, 6, 10-14 above, and further in view of WO2021/179033A1, hereinafter referred to as “WO’033.”
With regard to claim 5, Davarmanesh et al in view of Gisslen et al disclose a glitch detection method, but fail to disclose wherein the first audiovisual data are derived from the second audiovisual data via a deduplication process.
On the other hand, WO’033 discloses a defect detection system that analyzes images of pipes for defects ([0094]-[0103]), wherein a deduplication process is performed on classified artifact sets.
It would have been obvious before the effective filing date of the claimed invention to have used the deduplication process taught by WO’033 in the glitch detection method taught by Davarmanesh et al in view of Gisslen et al, since doing so would reduce the amount of duplicate data by pooling like images together.
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Davarmanesh et al in view of Gisslen et al as applied to claims 1-4, 6, 10-14 above, and further in view of U.S. Patent 10,863,294 to Yang et al.
With regard to claim 7, Davarmanesh et al in view of Gisslen et al fail to teach glitch analysis on audio data (Davarmanesh et al and Gisslen et al analyze image data), wherein a set of audio data segments is used to generate a set of images representing the respective set of audio data segments, wherein each image of the set of images corresponds to a respective audio data segment of the set of audio data segments and includes a spectrogram of the respective audio data segment.
Yang et al, in the same field of endeavor (glitch detection), discloses analyzing each audio data segment of a plurality of audio data segments and assessing whether the respective audio data segment is glitched (col. 3, line 27-col. 4, line 24), the assessing including: generating an image representing the audio data segment (Fig. 3), the image including a spectrogram of the audio data segment (col. 4, line 11).
It would have been obvious before the effective filing date of the claimed invention to have provided the glitched image detection method of Davarmanesh et al in view of Gisslen et al with the ability to analyze audio for glitches as taught by Yang et al since doing this would have provided a more comprehensive testing environment for glitches present in audiovisual data (ability to detect both audio glitches and image glitches without human intervention).
With regard to claim 8, Davarmanesh et al in view of Gisslen et al and Yang et al disclose the glitch detection method of claim 7, wherein: the plurality of images includes the set of images representing the respective set of audio data segments, the set of images includes a first image representing a first audio data segment, and classification, by the at least one model, of the first image as glitched indicates that the first audio data segment is glitched (see Yang et al, generating a classification output indicating whether the audio data segment represented by the image is glitched (col. 4, line 20-24)).
With regard to claim 9, Davarmanesh et al in view of Gisslen et al and Yang et al disclose the glitch detection method of claim 7, wherein the at least one model comprises at least one first model (image assessing CNN models taught by Davarmanesh et al), wherein the one or more records comprise one or more first records, and wherein the method further comprises: for each audio data segment of the set of audio data segments, assessing whether the respective audio data segment is glitched, including: providing, as an input to at least one second model (the "second model" being the machine vision model in Yang et al, Fig. 2, step S210), the image including the spectrogram (Yang et al, step S208) of the audio data segment; and generating, by the at least one second model, a classification output indicating whether the audio data segment represented by the image is glitched; generating one or more second records identifying one or more audio data segments of the set of audio data segments, each of the one or more audio data segments classified as glitched by the at least one second model; and providing the one or more second records to a user. See Yang et al, Fig. 2 and col. 1, lines 38-61 for device under test.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The cited NPL "Visual Glitches Classification for Video Game Using Deep Learning-based Techniques" describes various types of glitches commonly found in video games during game play and the use of neural networks to detect said glitches.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID OMETZ whose telephone number is (571)272-7593. The examiner can normally be reached M-F, 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID OMETZ/Primary Examiner, Art Unit 2672