DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Preliminary Amendment
The preliminary amendment received 06/04/2024 has been entered.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“a determination unit”, “a transfer unit”, and “an information management apparatus” for claims 1-14;
“a determination unit” and “a transfer unit” for claims 15-18;
“a generation unit” for claims 16-17;
“an assignment unit” for claim 18.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9-10, and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Honda (US 20210227129 A1) in view of Satoshi (US 20080129728 A1).
Regarding claim 1, Honda discloses an image management apparatus (Fig. 1A; [0058]: an image processing system; Fig. 1B; [0090]: the image capture apparatus 100 transmits the proxy image data recorded by the recording circuit 108 from the communication circuit 110 to the server 200 over the network 300; [0114]: the image capture apparatus 100 causes the display circuit 109 to display the image data for display, and causes the recording circuit 108 to record the image data for recording; Fig. 5A; [0119]: the image processing performed on the proxy image data is performed by an edge device 400; [0126]: an image processing circuit 405 generates signals and image data, obtains and generates various types of information; [0148]: the image capture apparatus 100 transmits a plurality of frames' worth of RAW data to the edge device 400) comprising:
a determination unit (Fig. 1B; [0089]: CPU; the control circuit 101 and the image processing circuit; Fig. 5A; Fig. 5B; [0119-0120]) configured to determine a type of a target image represented by image data transmitted from an external apparatus (Fig. 1A; [0058]: first and second external apparatuses; the image capture apparatus 100 and the server apparatus 200 communicate with each other using a communication protocol based on the type of the network;
[0076]: the proxy image data is image data having a lower data amount than the captured image data for recording and the RAW data; moving image data; Fig. 2; [0090]: the image capture apparatus 100 determines and transmits the proxy image data recorded by the recording circuit 108 from the communication circuit 110 to the server 200 over the network 300; Fig. 2; [0106]: the image capture apparatus 100 determines the RAW data; the original data of a proxy image associated with one or more person IDs is determined to be the image data; [0107]: the image capture apparatus 100 determines the proxy image data; [0110]: an image data file in a predetermined format such as the JPEG format; [0112]: the server 200 transmits the image data with JPEG format; [0119]); and
a transfer unit (Fig. 1B; [0089-0090]: CPU and the communication circuit 110; Fig. 5A; Fig. 5B; [0119-0120]) configured to transfer, in a case where the type of the target image determined by the determination unit is an original image, the image data of the target image to an information management apparatus (Fig. 2; [0108]: the image capture apparatus 100 transmits the RAW data to the server 200; Fig. 8; [0105]: the image capture apparatus 100 transmits the RAW data to the edge device; Fig. 2; [0106]: the image capture apparatus 100 determines the RAW data; the original data of a proxy image associated with one or more person IDs is determined to be the image data; [0108]: the image capture apparatus 100 transmits the RAW data to the server 200 and stands by to receive a developing result).
Honda fails to explicitly disclose:
configured to generate, from an input image, a proxy image having a smaller amount of data than the input image.
In the same field of endeavor, Satoshi teaches:
configured to generate, from an input image, a proxy image having a smaller amount of data than the input image (Fig. 2; [0056]: the 3D image creation section 110 receives multiple images captured by the imaging sections;
Fig. 2; [0058]: the thumbnail image creation section 120 creates multiple types of thumbnail images corresponding to the 3D image; thumbnail images have smaller amount of data; [0096]: the thumbnail image reproduction section 350 displays the thumbnail image inputted via the thumbnail image selection section 330 on a display device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Honda to include a unit configured to generate, from an input image, a proxy image having a smaller amount of data than the input image, as taught by Satoshi. The motivation for doing so would have been to receive multiple images captured by the imaging sections; to create multiple types of thumbnail images corresponding to the 3D image, on the basis of the integrated image created by the 3D image creation section 110 and the images from the multiple viewpoints; to manage the multiple thumbnail images; and to identify the type of each thumbnail image and an offset indicating the position, as taught by Satoshi in paragraphs [0056], [0058], [0064], and [0068].
Regarding claim 2, Honda in view of Satoshi discloses the image management apparatus according to claim 1, wherein
the determination unit is configured to determine the type of the target image based on transmission information transmitted from the external apparatus in association with the target image (Honda; [0076]: the proxy image data is image data having a lower data amount than the captured image data for recording and the RAW data; Fig. 2; [0106]: the image capture apparatus 100 determines the RAW data with larger data size; [0108]: transmit the RAW data to the server 200; [0110]: an image data file in a predetermined format such as the JPEG format; [0112]: the server 200 transmits the image data with JPEG format).
Regarding claim 3, Honda in view of Satoshi discloses the image management apparatus according to claim 2, wherein
the transmission information includes at least attribute information indicating a property of the target image (Honda; [0096]: determine that the detected facial region is the facial region of a registered person; [0110]: an image data file in a predetermined format such as the JPEG format; [0112]: the server 200 transmits the image data with JPEG format), and
the determination unit is configured to determine the type of the target image based on the attribute information (Honda; [0012]: receive image data having a higher resolution than the reduced image data from the information processing apparatus through the communication circuit; [0076]: the proxy image data is image data that has a lower resolution; [0077]: the display circuit 109 displays captured images, information obtained from captured images, e.g., a luminance histogram, setting values of the image capture apparatus 100, and GUI elements; [0110]: an image data file in a predetermined format such as the JPEG format; [0112]: the server 200 transmits the image data with JPEG format).
Honda in view of Satoshi further discloses:
the determination unit is configured to determine the type of the target image based on the attribute information (Satoshi; Fig. 5A; [0072]: 3D image; Fig. 5B; Fig. 5C; Fig. 5D; [0073-0075]: the frames of multiple images, 3D image, captured from multiple viewpoints are sequentially combined;
[0076]: the thumbnail image identifier, type ID, of the thumbnail image in the 3D image file is recorded; [0084]: the extension of the 3D image file is jpg, and the extension of the thumbnail image file is th3).
The same motivation as in claim 1 applies here.
Regarding claim 4, Honda in view of Satoshi discloses the image management apparatus according to claim 3, wherein
the attribute information includes at least one of a horizontal resolution of the target image, a vertical resolution of the target image, and a bit rate of the target image (Honda; [0012]: receive image data having a higher resolution than the reduced image data from the information processing apparatus through the communication circuit; [0076]: the proxy image data is image data that has a lower resolution).
Regarding claim 5, Honda in view of Satoshi discloses the image management apparatus according to claim 3, wherein
the determination unit is configured to determine the type of the target image based on the attribute information and a model identifier corresponding to a model of an image capturing apparatus that has captured the target image (Honda; [0025]: receive information that identifies selected image data and a partial region of the selected image data, from the first external apparatus; [0094]: the server 200 adds unique identification information (ID) to the feature information of each person; Fig. 4; [0097]: add a tag to the proxy image data on the basis of the result of the facial authentication processing).
Honda in view of Satoshi further discloses the determination unit is configured to determine the type of the target image based on the attribute information and a model identifier corresponding to a model of an image capturing apparatus that has captured the target image (Satoshi; Fig. 2; [0056]: multiple images captured by the imaging sections 10 and 12 arranged at multiple viewpoint positions; Fig. 4; [0067-0068]: identify the type of each thumbnail image and an offset indicating the position; [0084]: the extension of the 3D image file is jpg, and the extension of the thumbnail image file is th3; Fig. 5A-5D; [0072]: one integrated image, 3D image, is recorded after the header where tag data is recorded;
).
Regarding claim 9, Honda in view of Satoshi discloses the image management apparatus according to claim 2, wherein the transmission information includes at least information indicating the type of the target image (Honda; [0094]: the server 200 adds unique identification information (ID) to the feature information of each person; Fig. 4; [0097]: add a tag to the proxy image data on the basis of the result of the facial authentication processing).
Regarding claim 10, Honda in view of Satoshi discloses the image management apparatus according to claim 1, wherein the determination unit is configured to determine the type of the target image based on accompanying information assigned to the target image (Honda; [0077]: the display circuit 109 displays captured images, information obtained from captured images, e.g., a luminance histogram, setting values of the image capture apparatus 100, and GUI elements; [0094]: the server 200 adds unique identification information (ID) to the feature information of each person; Fig. 4; [0097]: add a tag to the proxy image data on the basis of the result of the facial authentication processing; [0106]: the original data of a proxy image associated with one or more person IDs is determined to be the image data; [0110]: an image data file in a predetermined format such as the JPEG format).
Honda in view of Satoshi further discloses wherein the determination unit is configured to determine the type of the target image based on accompanying information assigned to the target image (Satoshi; Fig. 4; [0067-0068]: identify the type of each thumbnail image and an offset indicating the position; [0084]: the extension of the 3D image file is jpg, and the extension of the thumbnail image file is th3; Fig. 5A-5D; [0072]).
Regarding claim 12, Honda in view of Satoshi discloses the image management apparatus according to claim 2, wherein
an image identifier used to identify the target image transmitted from the external apparatus, reference destination information indicating a reference destination of the image data of the target image (Honda; [0025]: receive information that identifies selected image data and a partial region of the selected image data, from the first external apparatus; [0094]: the server 200 adds unique identification information (ID) to the feature information of each person; Fig. 4; [0097]: add a tag to the proxy image data on the basis of the result of the facial authentication processing; [0138]: the image capture apparatus 100 transmits the RAW data which has been determined to the server 200; Fig. 8; [0150]: the image capture apparatus 100 transmits the RAW data to the edge device 400), and
information indicating the type of the target image are associated with each other and retained, and at least either the reference destination of the image data of the target image or the type of the target image is determined based on the image identifier (Satoshi; Fig. 2; [0056]: multiple images captured by the imaging sections 10 and 12 are arranged at multiple viewpoint positions; Fig. 4; [0067-0068]: identify the type of each thumbnail image and an offset indicating the position; [0084]: the extension of the 3D image file is jpg, and the extension of the thumbnail image file is th3; Fig. 5A-5D; [0072]: one integrated image, 3D image, is recorded after the header where tag data is recorded;
).
The same motivation as in claim 1 applies here.
Regarding claim 13, Honda in view of Satoshi discloses the image management apparatus according to claim 12, wherein
in a case where the type of the target image associated with the image identifier is the original image, the transfer unit transfers, to the information management apparatus, the image data of the target image indicated by the reference destination information associated with the image identifier (Honda; [0025]: receive information that identifies selected image data and a partial region of the selected image data, from the first external apparatus; [0094]: the server 200 adds unique identification information (ID) to the feature information of each person; Fig. 4; [0097]: add a tag to the proxy image data on the basis of the result of the facial authentication processing; [0110]: an image data file in a predetermined format such as the JPEG format; [0138]: the image capture apparatus 100 transmits the RAW data which has been determined to the server 200; Fig. 8; [0150]: the image capture apparatus 100 transmits the RAW data to the edge device 400).
Regarding claim 14, Honda in view of Satoshi discloses the image management apparatus according to claim 1, wherein the type of the target image is any one of an original image, a proxy image, and a secondary image (Honda; Fig. 2; [0090]: the image capture apparatus 100 determines and transmits the proxy image data recorded by the recording circuit 108 from the communication circuit 110 to the server 200 over the network 300; Fig. 2; [0106]: the image capture apparatus 100 determines the image data, i.e. RAW data, to which image processing is to be applied by the server 200; the original data of a proxy image associated with one or more person IDs is determined to be the image data; [0138]: the image capture apparatus 100 transmits the RAW data which has been determined to the server 200; Fig. 8; [0150]: the image capture apparatus 100 transmits the RAW data to the edge device 400).
Regarding claim 15, Honda discloses an image management system (Fig. 1A; [0058]: an image processing system; Fig. 1B; [0090]: the image capture apparatus 100 transmits the proxy image data recorded by the recording circuit 108 from the communication circuit 110 to the server 200 over the network 300; [0114]: the image capture apparatus 100 causes the display circuit 109 to display the image data for display, and causes the recording circuit 108 to record the image data for recording; Fig. 5A; [0119]: the image processing performed on the proxy image data is performed by an edge device 400; [0126]: an image processing circuit 405 generates signals and image data, obtains and generates various types of information; [0148]: the image capture apparatus 100 transmits a plurality of frames' worth of RAW data to the edge device 400) comprising:
a terminal device (Fig. 1A; [0058]: an image capture apparatus 100 and a server apparatus 200 serving as an external apparatus; first and second external apparatuses; Fig. 5A; [0119]: an edge device); and
an image management apparatus configured to transfer image data transmitted from the terminal device to an information management apparatus (Fig. 1A; [0058]: first and second external apparatuses; the image capture apparatus 100 and the server apparatus 200 communicate with each other using a communication protocol based on the type of the network;
Fig. 2; [0108]: the image capture apparatus 100 transmits the RAW data to the server 200; Fig. 8; [0105]: the image capture apparatus 100 transmits the RAW data to the edge device; [0138]: the image capture apparatus 100 transmits the RAW data which has been determined to the server 200; Fig. 8; [0150]: the image capture apparatus 100 transmits the RAW data to the edge device 400),
Honda fails to explicitly disclose:
configured to generate, from an input image, a proxy image having a smaller amount of data than the input image.
In the same field of endeavor, Satoshi teaches:
configured to generate, from an input image, a proxy image having a smaller amount of data than the input image (Fig. 2; [0056]: the 3D image creation section 110 receives multiple images captured by the imaging sections; Fig. 2; [0058]: the thumbnail image creation section 120 creates multiple types of thumbnail images corresponding to the 3D image),
The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 is also applied to reject claim 15.
Regarding claim 16, Honda in view of Satoshi discloses the image management system according to claim 15, wherein the terminal device includes a generation unit configured to generate, in association with the target image, transmission information including information regarding the type of the target image (Honda; [0012]: receive image data having a higher resolution than the reduced image data from the information processing apparatus through the communication circuit; [0076]: the proxy image data is image data having a lower data amount than the captured image data for recording and the RAW data; Fig. 2; [0106]: the image capture apparatus 100 determines the image data, i.e. RAW data, to which image processing is to be applied by the server 200; the original data of a proxy image associated with one or more person IDs is determined to be the image data; [0108]: transmit the RAW data to the server 200), and
The remaining claim limitations are similar to those recited in claims 2 and 3. Therefore, the same rationale used to reject claims 2 and 3 is also applied to reject claim 16.
Regarding claim 17, Honda in view of Satoshi discloses the image management system according to claim 16, wherein
the generation unit is configured to determine the type of the target image based on a storage area where the image data of the target image is stored in the terminal device, and to generate the transmission information including information indicating the type of the target image corresponding to a result of the determination (Satoshi; [0068]: the management information has a thumbnail image identifier, type ID, for identifying the type of each thumbnail image and an offset indicating the position in the file; Fig. 5A; [0072]: 3D image; Fig. 5B; Fig. 5C; Fig. 5D; [0073-0075]: the frames of multiple images, 3D image, captured from multiple viewpoints are sequentially combined;
[0076]: the thumbnail image identifier, type ID, of the thumbnail image in the 3D image file is recorded).
Regarding claim 18, Honda in view of Satoshi discloses the image management system according to claim 15, wherein
the terminal device includes an assignment unit configured to assign, to the target image, accompanying information including information indicating the type of the target image, and the determination unit is configured to determine the type of the target image based on the accompanying information assigned to the target image (Honda; [0077]: the display circuit 109 displays captured images, information obtained from captured images, e.g., a luminance histogram, setting values of the image capture apparatus 100, and GUI elements; [0094]: the server 200 adds unique identification information (ID) to the feature information of each person; Fig. 4; [0097]: add a tag to the proxy image data on the basis of the result of the facial authentication processing; [0106]: the original data of a proxy image associated with one or more person IDs is determined to be the image data; [0110]: an image data file in a predetermined format such as the JPEG format).
Honda in view of Satoshi further discloses the terminal device includes an assignment unit configured to assign, to the target image, accompanying information including information indicating the type of the target image, and the determination unit is configured to determine the type of the target image based on the accompanying information assigned to the target image (Satoshi; Fig. 4; [0067-0068]: identify the type of each thumbnail image and an offset indicating the position; Fig. 5A-5D; [0072]; [0084]: the extension of the 3D image file is jpg, and the extension of the thumbnail image file is th3).
The same motivation as in claim 15 applies here.
Regarding claim 19, Honda discloses an image management apparatus control method (Fig. 1A; [0058]: an image processing system; Fig. 1B; [0090]: the image capture apparatus 100 transmits the proxy image data recorded by the recording circuit 108 from the communication circuit 110 to the server 200 over the network 300; [0114]: the image capture apparatus 100 causes the display circuit 109 to display the image data for display, and causes the recording circuit 108 to record the image data for recording; Fig. 5A; [0119]: the image processing performed on the proxy image data is performed by an edge device 400; [0126]: an image processing circuit 405 generates signals and image data, obtains and generates various types of information; [0148]: the image capture apparatus 100 transmits a plurality of frames' worth of RAW data to the edge device 400) comprising:
The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 is also applied to reject claim 19.
Regarding claim 20, Honda discloses a non-transitory storage medium storing a program causing an image management apparatus to execute a control method, the control method (Fig. 1A; [0058]: an image processing system; Fig. 1B; [0090]: the image capture apparatus 100 transmits the proxy image data recorded by the recording circuit 108 from the communication circuit 110 to the server 200 over the network 300; [0114]: the image capture apparatus 100 causes the display circuit 109 to display the image data for display, and causes the recording circuit 108 to record the image data for recording; Fig. 5A; [0119]: the image processing performed on the proxy image data is performed by an edge device 400; [0126]: an image processing circuit 405 generates signals and image data, obtains and generates various types of information; [0148]: the image capture apparatus 100 transmits a plurality of frames' worth of RAW data to the edge device 400) comprising:
The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 is also applied to reject claim 20.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Honda (US 20210227129 A1) in view of Satoshi (US 20080129728 A1), and further in view of Li (US 20110243372 A1).
Regarding claim 11, Honda in view of Satoshi discloses the image management apparatus according to claim 10, wherein
the accompanying information includes at least partial information to which information indicating the type of the target image is assigned, out of an identification name of the target image, part of the identification name of the target image (Honda; [0077]: the display circuit 109 displays captured images, information obtained from captured images, e.g., a luminance histogram, setting values of the image capture apparatus 100, and GUI elements; [0094]: the server 200 adds unique identification information (ID) to the feature information of each person; Fig. 4; [0097]: add a tag to the proxy image data on the basis of the result of the facial authentication processing; [0106]: the original data of a proxy image associated with one or more person IDs is determined to be the image data; [0110]: an image data file in a predetermined format such as the JPEG format),
Honda in view of Satoshi further discloses a prefix of the identification name of the target image, and a suffix of the identification name of the target image excluding an extension and a delimiter of the identification name of the target image, the accompanying information accompanying the target image, information regarding a manufacturer's note area included in the accompanying information of the target image, predetermined information embedded in an image data section of the target image (Satoshi; Fig. 4; [0067-0068]: identify the type of each thumbnail image and an offset indicating the position; Fig. 6; [0078]: the 3D image file and the thumbnail image file;
Fig. 5A-5D; [0072]; [0084]: the extension of the 3D image file is jpg, and the extension of the thumbnail image file is th3; Fig. 13C; [0122]: the header, the first thumbnail image, and a 3D image corresponding to the viewpoint 1 are recorded in the format in conformity with the current Exif standard),
Honda in view of Satoshi fails to explicitly disclose:
information regarding a watermark area of the target image, and information regarding an alpha channel area of the target image.
In the same field of endeavor, Li teaches:
information regarding a watermark area of the target image, and information regarding an alpha channel area of the target image ([0026]: annotation 130 includes text; annotations include text, symbols, indicia, a time/date stamp, page number(s), phone number(s), address(es), digital signature(s), watermark(s), confidential or propriety statement(s), legal disclaimer(s), image(s), etc, or combinations thereof; Fig. 2; [0031]: the opaque annotation object 220 generally includes pixels corresponding to an annotation 230 and background 230; the opaque annotation object 220 includes information regarding an alpha channel area of the target image;
[0040]: the electronic document management module 420 generates and displays a plurality of "thumbnails", e.g., smaller images).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Honda in view of Satoshi to include information regarding a watermark area of the target image, and information regarding an alpha channel area of the target image as taught by Li. The motivation for doing so would have been to superimpose watermarks; to superimpose the opaque annotation object 220; to generate and display a plurality of thumbnails, e.g., smaller images as taught by Li in paragraphs [0026], [0031], and [0040].
Allowable Subject Matter
Claims 6-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun whose telephone number is (571)272-5630. The examiner can normally be reached 9:00AM-6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAI TAO SUN/Primary Examiner, Art Unit 2616