DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 01/24/2025 and 01/16/2026 were considered by the examiner.
Drawings
The drawings were received on 01/24/2026. These drawings are acceptable.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “decoding device is configured to perform…” in claim 21 and “coding device is configured to perform…” in claim 22.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-14 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 12, and 18 recite the limitation “the predicted residual value of the first pixel point”. There is insufficient antecedent basis for this limitation in the claims. Dependent claims 2-11 and 13-14 fall together accordingly.
The term “just noticeable” in claim 9 is a relative term which renders the claim indefinite. The term “just noticeable” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-8, 12-14, 18, and 20-24 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Wee US 2019/0206092 A1, hereafter Wee.
Regarding claim 1, Wee discloses a picture decoding method, performed by a decoding device (image processing method and system; at the time when decoding is implemented, accordingly, inverse quantization can be performed with the quantization values) [title; 0006], comprising:
determining a predicted residual value of a pixel point according to a prediction manner of the pixel point (prediction value pred(Pk)) [0055], wherein the predicted residual value is configured to reflect a gradient of the pixel point (the pixel data of the pixels may be represented with the pixel values Pk or differential values dk. The differential value dk is defined as a difference value between the pixel value Pk of the corresponding pixel and a prediction value pred(Pk) of the corresponding pixel) [0055], the prediction manner is configured to indicate a position of one or more reconstructed pixel points referenced when performing prediction on a pixel point (the image processing system 100 check the pixel data of a influencing pixel used in determining the quantization value of the first pixel 14 among the surround pixels…around the first pixel 14) [0058], and the pixel point is any pixel point in a current coding block (processing order of first pixel S100) [FIG. 2];
determining a target quantization parameter (QP) value of a first pixel point according to the predicted residual value of the first pixel point (calculating quantization/inverse quantization value S120; the quantization value of the first pixel 14 can be defined as a given function with the pixel values…of influencing pixels…as parameters) [FIG. 2; 0070], wherein the first pixel point is a target pixel point in the current coding block (quantization is performed in the unit of a given block) [0005], and the target pixel point is a preset pixel point for adjusting the QP value (the influencing pixel may be a pixel set in advanced to be used in determining the quantization value of the first pixel 14) [0059]; and
performing dequantization on the first pixel point according to the target QP value of the first pixel point (quantization/inverse quantization S130) [FIG. 2].
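For clarity of the mapping, the claimed decoding flow (determine a predicted residual, derive a target QP from it, then dequantize with that QP) can be sketched as follows. The function names, the residual-to-QP rule, and the shift-based dequantization are illustrative assumptions only, not the implementation of Wee or of applicant:

```python
def decode_pixel(predicted_residual, block_qp, quantized_residual):
    # Derive the target QP from the predicted residual (placeholder rule:
    # lower the QP on flat content; the claim does not fix this mapping).
    target_qp = block_qp - 1 if predicted_residual <= 3 else block_qp
    # Dequantize the first pixel point with the target QP (uniform scalar
    # dequantization by a power-of-two step, shown only as a stand-in).
    return quantized_residual << target_qp
```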
Regarding claim 2, Wee addresses all of the features with respect to claim 1 as outlined above.
Wee further discloses the prediction manner comprises at least one of:
performing prediction according to reconstructed pixel points on a left side and a right side of the pixel point; performing prediction according to one or more reconstructed pixel points on the left side of the pixel point; or performing prediction according to one or more reconstructed pixel points on an upper side of the pixel point (as shown in FIGS. 3 and 4…given pixels 10, 11, 12, 13, 14, 15, 16, 17, and 18; when the decoding is implemented…the pixel decoded before the first pixel 14, that is, at least one of the decoding precedent pixels is determined as the influencing pixel) [0055; 0061].
Regarding claim 3, Wee addresses all of the features with respect to claim 2 as outlined above.
Wee further discloses the prediction manner of the pixel point is to perform prediction according to the reconstructed pixel points on the left side and the right side of the pixel point, and determining the predicted residual value of the pixel point according to the prediction manner of the pixel point comprises:
calculating a difference between a pixel value of a second pixel point and a pixel value of a third pixel point as a first difference value, or taking a residual value of the second pixel point after performing dequantization as the first difference value, wherein the second pixel point is a first reconstructed pixel point on a left side of the pixel point, and the third pixel point is a first reconstructed pixel point on an upper side of the second pixel point (as shown in FIGS. 3 and 4…given pixels 10, 11, 12, 13, 14, 15, 16, 17, and 18 are located on original images…the pixel data of the pixels may be represented with the pixel values Pk or differential values dk. The differential value dk is defined as a difference value between the pixel value Pk of the corresponding pixel and a prediction value pred(Pk) of the corresponding pixel; the image processing system 100 is not limited to using only the directly adjacent pixels to the first pixel 14 as the surrounding pixels around the first pixel 14; as shown in FIG. 5, the surrounding pixels around a given first pixel 20 are defined as the directly adjacent pixels) [0055; 0080; 0081];
calculating a difference between a pixel value of a fourth pixel point and a pixel value of a fifth pixel point as a second difference value, or taking a residual value of the fourth pixel point after performing dequantization as the second difference value, wherein the fourth pixel point is a first reconstructed pixel point on a right side of the pixel point, and the fifth pixel point is a first reconstructed pixel point on an upper side of the fourth pixel point (as shown in FIGS. 3 and 4…given pixels 10, 11, 12, 13, 14, 15, 16, 17, and 18 are located on original images…the pixel data of the pixels may be represented with the pixel values Pk or differential values dk. The differential value dk is defined as a difference value between the pixel value Pk of the corresponding pixel and a prediction value pred(Pk) of the corresponding pixel; the image processing system 100 is not limited to using only the directly adjacent pixels to the first pixel 14 as the surrounding pixels around the first pixel 14; as shown in FIG. 5, the surrounding pixels around a given first pixel 20 are defined as the directly adjacent pixels) [0055; 0080; 0081]; and
taking an average value of an absolute value of the first difference value and an absolute value of the second difference value as the predicted residual value of the pixel point (the image complexity can be processed using the average, deviation, or variance of differences between the pixel values (for example, abs(p1-p2), abs(p2-p3), abs(pe-p4), abs(p4-p1), here abs( ) means absolute value) of the influencing pixels) [0069].
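The claim 3 computation mapped above can be sketched as follows. This is an illustrative reading of the claim language with hypothetical names, not code from Wee:

```python
def predicted_residual_lr(left, above_left, right, above_right):
    # First difference value: the left-side reconstructed neighbor (second
    # pixel point) minus its upper neighbor (third pixel point).
    d1 = left - above_left
    # Second difference value: the right-side reconstructed neighbor (fourth
    # pixel point) minus its upper neighbor (fifth pixel point).
    d2 = right - above_right
    # Predicted residual: average of the two absolute difference values.
    return (abs(d1) + abs(d2)) / 2
```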
Regarding claim 4, Wee addresses all of the features with respect to claim 2 as outlined above.
Wee further discloses the prediction manner of the pixel point is to perform prediction according to the one or more reconstructed pixel points on the left side of the pixel point, and determining the predicted residual value of the pixel point according to the prediction manner of the pixel point comprises:
taking an absolute value of a difference between a pixel value of a tenth pixel point and a pixel value of an eleventh pixel point as the predicted residual value of the pixel point, wherein the tenth pixel point is a first reconstructed pixel point on a left side of the pixel point, and the eleventh pixel point is a first reconstructed pixel point on a left side of the tenth pixel point (as shown in FIGS. 3 and 4…given pixels 10, 11, 12, 13, 14, 15, 16, 17, and 18 are located on original images…the pixel data of the pixels may be represented with the pixel values Pk or differential values dk. The differential value dk is defined as a difference value between the pixel value Pk of the corresponding pixel and a prediction value pred(Pk) of the corresponding pixel; the image processing system 100 is not limited to using only the directly adjacent pixels to the first pixel 14 as the surrounding pixels around the first pixel 14; as shown in FIG. 5, the surrounding pixels around a given first pixel 20 are defined as the directly adjacent pixels) [0055; 0080; 0081]; or
taking an absolute value of a difference between the pixel value of the tenth pixel point and a pixel value of a twelfth pixel point as the predicted residual value of the pixel point, wherein the twelfth pixel point is a first reconstructed pixel point on an upper side of the tenth pixel point; or
taking an absolute value of a residual value of the tenth pixel point after performing dequantization as the predicted residual value of the pixel point (the image complexity can be processed using the average, deviation, or variance of differences between the pixel values (for example, abs(p1-p2), abs(p2-p3), abs(pe-p4), abs(p4-p1), here abs( ) means absolute value) of the influencing pixels) [0069].
Regarding claim 5, Wee addresses all of the features with respect to claim 2 as outlined above.
Wee further discloses the prediction manner of the pixel point is to perform prediction according to the one or more reconstructed pixel points on the upper side of the pixel point, and determining the predicted residual value of the pixel point according to the prediction manner of the pixel point comprises:
taking an absolute value of a difference between a pixel value of a thirteenth pixel point and a pixel value of a fourteenth pixel point as the predicted residual value of the pixel point, wherein the thirteenth pixel point is a first reconstructed pixel point on an upper side of the pixel point, and the fourteenth pixel point is a first reconstructed pixel point on an upper side of the thirteenth pixel point (as shown in FIGS. 3 and 4…given pixels 10, 11, 12, 13, 14, 15, 16, 17, and 18 are located on original images…the pixel data of the pixels may be represented with the pixel values Pk or differential values dk. The differential value dk is defined as a difference value between the pixel value Pk of the corresponding pixel and a prediction value pred(Pk) of the corresponding pixel; the image processing system 100 is not limited to using only the directly adjacent pixels to the first pixel 14 as the surrounding pixels around the first pixel 14; as shown in FIG. 5, the surrounding pixels around a given first pixel 20 are defined as the directly adjacent pixels) [0055; 0080; 0081]; or
taking an absolute value of a residual value of the thirteenth pixel point after performing dequantization as the predicted residual value of the pixel point (the image complexity can be processed using the average, deviation, or variance of differences between the pixel values (for example, abs(p1-p2), abs(p2-p3), abs(pe-p4), abs(p4-p1), here abs( ) means absolute value) of the influencing pixels) [0069].
Regarding claim 6, Wee addresses all of the features with respect to claim 1 as outlined above.
Wee further discloses the predicted residual value of the pixel point comprises:
a target value or an average value of target values, wherein the target value is a gradient of one or more reconstructed pixel points surrounding the pixel point, or the target value is an absolute value of the gradient of the one or more reconstructed pixel points surrounding the pixel point, or the target value is a residual value of the one or more reconstructed pixel points surrounding the pixel point after performing dequantization, or the target value is an absolute value of the residual value of the one or more reconstructed pixel points surrounding the pixel point after performing dequantization (the image complexity can be processed using the average, deviation, or variance of differences between the pixel values (for example, abs(p1-p2), abs(p2-p3), abs(pe-p4), abs(p4-p1), here abs( ) means absolute value) of the influencing pixels) [0069].
Regarding claim 7, Wee addresses all of the features with respect to claim 1 as outlined above.
Wee further discloses determining a predicted QP value of the pixel point, wherein the predicted QP value of the pixel point is a QP value of the current coding block (so as to decode the first pixel 14, accordingly, the image processing system 100 performs a process for obtaining the pixel value P’k of the first pixel 14 as follows) [0090],
wherein determining the target QP value of the first pixel point according to the predicted residual value of the first pixel point comprises:
adjusting the predicted QP value of the first pixel point according to the predicted residual value of the first pixel point to obtain the target QP value of the first pixel point (if dk and pred(Pk) are obtained, accordingly the processing system 100 can obtain the pixel value P’k; the quantization value of the first pixel 14 is determined on the basis of the influencing pixels (for example, at least one of pixel 10, 11, 12, and 13); while the quantization is being implemented the quantization values of the given pixel is determined based on the pixel data of the influencing pixels…the quantization values of given pixel can be recognized in the same manner, and if quantization value of the given pixel is recognized, their inverse quantization can be implemented with the same values) [0092; 0093; 0094].
Regarding claim 8, Wee addresses all of the features with respect to claim 7 as outlined above.
Wee further discloses adjusting the predicted QP value of the first pixel point according to the predicted residual value of the first pixel point to obtain the target QP value of the first pixel point comprises:
in response to the predicted QP value of the first pixel point being greater than or equal to a first threshold and less than or equal to a second threshold and the predicted residual value of the first pixel point being less than or equal to a third threshold, adjusting the predicted QP value of the first pixel point to obtain the target QP value, wherein the target QP value is less than the predicted QP value (determine the quantization value in proportion to the image complexity of the area corresponding to the first pixel 14 and the influencing pixels) [0066]; and
otherwise, taking the predicted QP value of the first pixel point as the target QP value of the first pixel point (the quantization value of the first pixel 14 can be defined as a given function with the pixel values…of the influencing pixels…as parameters) [0070].
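The claim 8 adjustment condition can be sketched as follows. The threshold values and the step size are illustrative assumptions; the claim requires only that the target QP be lower than the predicted QP when the condition is met:

```python
def adjust_qp(pred_qp, pred_residual, first_thr, second_thr, third_thr, step=1):
    # Adjust only when the predicted QP lies in [first_thr, second_thr]
    # and the predicted residual does not exceed third_thr.
    if first_thr <= pred_qp <= second_thr and pred_residual <= third_thr:
        return pred_qp - step  # target QP is less than the predicted QP
    # Otherwise the predicted QP is taken as the target QP.
    return pred_qp
```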
Regarding claim 12, Wee discloses a picture coding method, performed by a coding device (image processing method and system) [title], comprising:
determining a predicted residual value of a pixel point according to a prediction manner of the pixel point (prediction value pred(Pk)) [0055], wherein the predicted residual value is configured to reflect a gradient of the pixel point (the pixel data of the pixels may be represented with the pixel values Pk or differential values dk. The differential value dk is defined as a difference value between the pixel value Pk of the corresponding pixel and a prediction value pred(Pk) of the corresponding pixel) [0055], the prediction manner is configured to indicate a position of one or more reconstructed pixel points referenced when performing prediction on a pixel point (the image processing system 100 check the pixel data of a influencing pixel used in determining the quantization value of the first pixel 14 among the surround pixels…around the first pixel 14) [0058], and the pixel point is any pixel point in a current coding block (processing order of first pixel S100) [FIG. 2];
determining a target quantization parameter (QP) value of a first pixel point according to the predicted residual value of the first pixel point (calculating quantization/inverse quantization value S120; the quantization value of the first pixel 14 can be defined as a given function with the pixel values…of influencing pixels…as parameters) [FIG. 2; 0070], wherein the first pixel point is a target pixel point in the current coding block (quantization is performed in the unit of a given block) [0005], and the target pixel point is a preset pixel point for adjusting the QP value (the influencing pixel may be a pixel set in advanced to be used in determining the quantization value of the first pixel 14) [0059]; and
performing quantization on the first pixel point according to the target QP value of the first pixel point (quantization/inverse quantization S130) [FIG. 2].
Regarding claim 13, Wee addresses all of the features with respect to claim 12 as outlined above.
Wee further discloses the predicted residual value of the pixel point comprises:
a target value or an average value of target values, wherein the target value is a gradient of one or more reconstructed pixel points surrounding the pixel point, or the target value is an absolute value of the gradient of the one or more reconstructed pixel points surrounding the pixel point, or the target value is a residual value of the one or more reconstructed pixel points surrounding the pixel point after performing dequantization, or the target value is an absolute value of the residual value of the one or more reconstructed pixel points surrounding the pixel point after performing dequantization (the image complexity can be processed using the average, deviation, or variance of differences between the pixel values (for example, abs(p1-p2), abs(p2-p3), abs(pe-p4), abs(p4-p1), here abs( ) means absolute value) of the influencing pixels) [0069].
Regarding claim 14, Wee addresses all of the features with respect to claim 12 as outlined above.
Wee further discloses determining a predicted QP value of the pixel point, wherein the predicted QP value of the pixel point is a QP value of the current coding block (so as to decode the first pixel 14, accordingly, the image processing system 100 performs a process for obtaining the pixel value P’k of the first pixel 14 as follows) [0090],
wherein determining the target QP value of the first pixel point according to the predicted residual value of the first pixel point comprises:
adjusting the predicted QP value of the first pixel point according to the predicted residual value of the first pixel point to obtain the target QP value of the first pixel point (if dk and pred(Pk) are obtained, accordingly the processing system 100 can obtain the pixel value P’k; the quantization value of the first pixel 14 is determined on the basis of the influencing pixels (for example, at least one of pixel 10, 11, 12, and 13); while the quantization is being implemented the quantization values of the given pixel is determined based on the pixel data of the influencing pixels…the quantization values of given pixel can be recognized in the same manner, and if quantization value of the given pixel is recognized, their inverse quantization can be implemented with the same values) [0092; 0093; 0094].
Claim 18 is drawn to an electronic device adapted to implement the method of claim 1, and is therefore rejected in the same manner as above. However, the claim also recites a processor and a memory, which Wee also teaches (processor 110, memory 120) [FIG. 1].
Claim 20 is drawn to an electronic device adapted to implement the method of claim 12, and is therefore rejected in the same manner as above. However, the claim also recites a processor and a memory, which Wee also teaches (processor 110, memory 120) [FIG. 1].
Claim 21 is drawn to a coding system adapted to implement the method of claim 1, and is therefore rejected in the same manner as above. However, the claim also recites a decoding device, which Wee also teaches (image processing system 100 refers to a data processing device capable of decoding and decoding an image) [0040].
Claim 22 is drawn to a coding system adapted to implement the method of claim 12, and is therefore rejected in the same manner as above. However, the claim also recites a coding device, which Wee also teaches (image processing system 100 refers to a data processing device capable of decoding and decoding an image) [0040].
Computer readable medium claim 23 is drawn to the instructions corresponding to the method of claim 1. Therefore, computer readable medium claim 23 corresponds to method claim 1 and is rejected for the same reasons of unpatentability as used above.
Computer readable medium claim 24 is drawn to the instructions corresponding to the method of claim 12. Therefore, computer readable medium claim 24 corresponds to method claim 12 and is rejected for the same reasons of unpatentability as used above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Wee in view of Lin et al. US 2021/0337205 A1, hereafter Lin.
Regarding claim 9, Wee addresses all of the features with respect to claim 8 as outlined above.
However, Wee fails to explicitly disclose the first threshold is a QP value corresponding to just noticeable distortion, and the second threshold is an adjustable maximum QP value.
Lin, in an analogous environment, discloses the first threshold is a QP value corresponding to just noticeable distortion, and the second threshold is an adjustable maximum QP value (determine, based on the determined gradient value and the gradient direction complexity, the JND threshold corresponding to each pixel point in the at least one pixel block S204; determine a block JND threshold corresponding to each pixel block, based on the JND threshold corresponding to each pixel point in the at least one pixel block S 205) [FIG. 2].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the thresholds, as disclosed by Lin, with the invention disclosed by Wee, the motivation being to improve processing speed (Lin, [0039]).
Allowable Subject Matter
Claims 10 and 11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Citation of Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Galpin et al. US 2018/0255302 A1 discloses adjusting a quantization parameter for encoding and decoding.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEFAN GADOMSKI whose telephone number is (571)270-5701. The examiner can normally be reached Monday - Friday, 12-8PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
STEFAN GADOMSKI
Primary Examiner
Art Unit 2485
/STEFAN GADOMSKI/Primary Examiner, Art Unit 2485