DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Specification
2. The disclosure is objected to because of the following informalities:
Specification, page 3, paragraph [0008], lines 12-13: "… the human machine interface and the human machine interface to determine the remaining displayable areas". The recitation “the human machine interface and the human machine interface” is unclear.
Specification, page 4, paragraph [0009], line 14: "… the human machine interface and the human machine interface to determine the remaining displayable areas". The recitation “the human machine interface and the human machine interface” is unclear.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1, 5-7, 10-11 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Yi Gao (machine translation of CN-104238911-A with citation below, hereinafter “Gao”) in view of Yusuke Yasukawa (machine translation of JP-2016177428-A with citation below, hereinafter “Yasukawa”).
Regarding claim 1, Gao discloses a method of displaying an application icon of an application program (Gao- ¶0009-0012, at least disclose a load icon display method […] when target display content is loaded, identifying a characteristic area of current display content in a display area corresponding to the target display content; determining the display position of the loading icon in the display area according to the characteristic area; and displaying the loading icon according to the display position), comprising:
in response to an activation of the application program, determining whether any important information or any human machine interface appears in a preset display region on a display interface (Gao- ¶0010-0011, at least disclose when target display content is loaded, identifying a characteristic area of current display content in a display area corresponding to the target display content; determining the display position of the loading icon in the display area according to the characteristic area; ¶0017, at least discloses identifying a target object in the current display content, and determining the area where the target object is located as the first characteristic area; ¶0126, at least discloses taking the terminal as an example to load the original image of the thumbnail, when the terminal receives the click signal acting on the thumbnail and then starts to load the original image of the thumbnail, the terminal may identify the target object in the enlarged thumbnail and determine the area where the target object is located as the first feature area [determining whether any important information or any human machine interface appears in a preset display region on a display interface]);
a target display region where neither important information nor human machine interface appears is determined in the display interface (Gao- ¶0014-0015, at least disclose when only a first characteristic region is identified, determining the display position of the loading icon in the region except the first characteristic region in the display region, wherein the first characteristic region is a region which is interested in prediction; or when only a second characteristic region is identified, determining the display position of the loading icon in the second characteristic region, wherein the second characteristic region is a region which is not interested in prediction [neither important information nor human machine interface appears is determined in the display interface]; ¶0021-0022, at least disclose identifying a target object in the current display content, and determining an area where the target object is located as the first characteristic area; identifying a solid color region in a region of the display region other than the first feature region, and determining the solid color region as the second feature region); and
displaying the application icon of the application program in the target display region (Gao- Fig. 2A and ¶0143-0144, at least disclose the loading icon may also be a triangle, a rectangle, or an icon with any other shape […] In step 203, a load icon is displayed according to the display position).
Gao does not explicitly disclose in response to the important information or the human machine interface being determined to appear in the preset display region, determining whether any important information or any human machine interface appears in a shifted display region on the display interface, and repeating the process of determining a presence of any important information or any human machine interface in the shifted display region until a target display region where neither important information nor human machine interface appears is determined in the display interface.
However, Yasukawa discloses
in response to the important information or the human machine interface being determined to appear in the preset display region, determining whether any important information or any human machine interface appears in a shifted display region on the display interface, and repeating the process of determining a presence of any important information or any human machine interface in the shifted display region (Yasukawa- ¶0007-0008, at least disclose When there are a plurality of areas in the direction, a determination process for determining an order for each of the plurality of areas and, based on the order, the starting point of the plurality of areas existing in the direction from the starting point is Processing for selecting a candidate area as a storage destination of the display object to be pointed […] a process of detecting a user operation indicating a starting point indicating a display object and a direction from the starting point on a display screen including a plurality of regions for storing display objects, and the above starting point. When there are a plurality of areas in the direction, the starting point indicates the process of determining the order for each of the plurality of areas and the plurality of areas existing in the direction from the starting point based on the order. When the process of selecting the storage destination area of the display object and the user operation that is the same as or similar to the user operation are detected again, the selection of the storage destination area is canceled and the storage destination is set based on the above order. And processing for selecting the next area; Fig. 9 and ¶0072, at least disclose In step S901, the determination unit 315 determines the order in which the plurality of desktop areas 201 that intersect with the extension line are selected in descending order of the counter value […] it is possible to give priority to the selection of the storage destination that is frequently used. 
For example, when repeatedly copying to the same desktop area 201, the operation becomes simpler).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao to incorporate the teachings of Yasukawa, applying Yasukawa's determining of an order for each of the plurality of areas and selecting of the next area, such that, in response to the important information or the human machine interface being determined to appear in the preset display region, it is determined whether any important information or any human machine interface appears in a shifted display region on the display interface, and the process of determining a presence of any important information or any human machine interface in the shifted display region is repeated until a target display region where neither important information nor human machine interface appears is determined in the display interface.
Doing so would simplify an operation of moving a display object in a display screen.
Regarding claim 5, Gao in view of Yasukawa discloses the method according to claim 1, and further discloses wherein after determining whether any important information or any human machine interface appears in the preset display region on the display interface (see Claim 1 rejection for detailed analysis), the method further comprises:
in response to neither any important information nor any human machine interface being determined to appear in the preset display region, determining the preset display region to be a target display region (Gao- ¶0014-0015, at least disclose when only a first characteristic region is identified, determining the display position of the loading icon in the region except the first characteristic region in the display region, wherein the first characteristic region is a region which is interested in prediction; or when only a second characteristic region is identified, determining the display position of the loading icon in the second characteristic region, wherein the second characteristic region is a region which is not interested in prediction [neither important information nor human machine interface appears is determined in the display interface]; ¶0021-0022, at least disclose identifying a target object in the current display content, and determining an area where the target object is located as the first characteristic area; identifying a solid color region in a region of the display region other than the first feature region, and determining the solid color region as the second feature region; ¶0087, at least discloses when target display content is loaded, identifying a characteristic area of current display content in a display area corresponding to the target display content).
Regarding claim 6, Gao in view of Yasukawa discloses the method according to claim 1, and further discloses wherein before determining whether any important information or any human machine interface appears in the preset display region on the display interface (see Claim 1 rejection for detailed analysis), the method further comprises:
obtaining the shape information of the application icon (Gao- ¶0041-0042, at least disclose when the number of the second characteristic areas is two or more, acquiring the shape and/or size of the loading icon […] for each second characteristic region, calculating the matching degree of the loading icon and the second characteristic region according to the shape of the loading icon and the shape of the second characteristic region, and/or the size of the loading icon and the size of the second characteristic region), the shape information including at least one icon length of the application icon in at least one shift direction (Gao- ¶0041-0042, at least disclose when the number of the second characteristic areas is two or more, acquiring the shape and/or size of the loading icon […] for each second characteristic region, calculating the matching degree of the loading icon and the second characteristic region according to the shape of the loading icon and the shape of the second characteristic region, and/or the size of the loading icon and the size of the second characteristic region; Yasukawa- Fig. 2 and ¶0018, at least disclose The flick start point 207 indicates the start point in the flick operation when the user who is seated in front of the desktop area 201 a tries to move the icon 203 a to another desktop area 201. A flick direction 209 indicates a direction in the flick operation.); and
in response to some important information or some human machine interface being determined to appear in the preset display region (Gao- ¶0009-0012, at least disclose a load icon display method […] when target display content is loaded, identifying a characteristic area of current display content in a display area corresponding to the target display content; determining the display position of the loading icon in the display area according to the characteristic area; and displaying the loading icon according to the display position), shifting the preset display region in the at least one shift direction by a shift distance to determine a next display region, the shift distance being greater than one half of the icon length (Yasukawa- Fig. 2 shows moving the icon 203a from desktop area 201a to another desktop area 201 (corresponding to shifting the preset display region in the at least one shift direction by a shift distance to determine a next display region), the shift distance being greater than one half of the length of the icon 203a; ¶0018, at least disclose The flick start point 207 indicates the start point in the flick operation when the user who is seated in front of the desktop area 201 a tries to move the icon 203 a to another desktop area 201. A flick direction 209 indicates a direction in the flick operation).
Regarding claim 7, Gao in view of Yasukawa discloses the method according to claim 6, and further discloses wherein:
obtaining the shape information of the application icon includes obtaining a first icon length of the application icon in a first shift direction and a second icon length of the application icon in a second shift direction (Gao- ¶0041-0042, at least disclose when the number of the second characteristic areas is two or more, acquiring the shape and/or size of the loading icon […] for each second characteristic region, calculating the matching degree of the loading icon and the second characteristic region according to the shape of the loading icon and the shape of the second characteristic region, and/or the size of the loading icon and the size of the second characteristic region; Yasukawa- Fig. 2 and ¶0018, at least disclose The flick start point 207 indicates the start point in the flick operation when the user who is seated in front of the desktop area 201 a tries to move the icon 203 a to another desktop area 201. A flick direction 209 indicates a direction in the flick operation; Fig. 10 and ¶0047, at least disclose FIG. 10 shows a state where the copy destination of the icon 203a is provisionally determined in the desktop area 201b. In the desktop area 201b, an icon 1001 is displayed in such a manner that it is identified as being in a provisional state. Further, an arrow 1003 may be displayed from the flick start point 207 toward the icon 1001.); and
shifting the preset display region in the at least one shift direction by the shift distance to determine the next display region includes shifting the preset display region in the first shift direction by a first shift distance to determine a first candidate region and shifting the preset display region in the second shift direction by a second shift distance to determine a second candidate region (Yasukawa- Fig. 2 shows moving the icon 203a from desktop area 201a to another desktop area 201 (corresponding to shifting the preset display region in the at least one shift direction by the shift distance to determine the next display region); ¶0018, at least disclose The flick start point 207 indicates the start point in the flick operation when the user who is seated in front of the desktop area 201 a tries to move the icon 203 a to another desktop area 201. A flick direction 209 indicates a direction in the flick operation; Fig. 10 and ¶0047, at least disclose FIG. 10 shows a state where the copy destination of the icon 203a is provisionally determined in the desktop area 201b. In the desktop area 201b, an icon 1001 is displayed in such a manner that it is identified as being in a provisional state. Further, an arrow 1003 may be displayed from the flick start point 207 toward the icon 1001).
The devices of claims 10 and 15-17 are similar in scope to the functions performed by the method of claims 1 and 5-7, and therefore claims 10 and 15-17 are rejected under the same rationale.
Regarding claim 10, Gao in view of Yasukawa discloses a device of displaying an application icon of an application program (Gao- Figs. 5-7 show an apparatus for displaying a load icon; ¶0252, at least discloses an apparatus 700 for displaying a load icon […] [t]he apparatus 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like), comprising a memory storing computer instructions and a processor coupled to the memory (Gao- Figs. 5-7; ¶0253-0255, at least disclose apparatus 700 may include one or more of the following components: a processing component 702, a memory 704 […] The processing component 702 may include one or more processors 718 to execute instructions to perform all or a portion of the steps of the methods […] The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700), wherein the processor is configured to execute the computer instructions to perform the method of claim 1.
Regarding claim 11, Gao in view of Yasukawa, discloses a computer-readable storage medium storing computer instructions, wherein when being executed by a processor (Gao- Fig.7 and ¶0255, at least disclose The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks), the computer instructions cause the processor to perform the method of claim 1.
5. Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Gao in view of Yasukawa, and further in view of Otsuka (“Otsuka”) [US-2017/0249299-A1].
Regarding claim 2, Gao in view of Yasukawa discloses the method according to claim 1, including determining whether any important information or any human machine interface appears in the preset display region on the display interface (see Claim 1 rejection for detailed analysis).
The prior art does not explicitly disclose performing a text recognition process on the preset display region to determine whether any text information exists in the preset display region; in response to no text information being determined to exist in the preset display region, performing an image recognition process on the preset display region to determine an importance of an image of the preset display region; in response to some text information being determined to exist in the preset display region or the image of the preset display region being determined to be of high importance, determining that important information appears in the preset display region; and in response to no text information being determined to exist in the preset display region and the image of the preset display region being determined to be of low importance, determining that no important information appears in the preset display region.
However, Otsuka discloses
performing a text recognition process on the preset display region to determine whether any text information exists in the preset display region (Otsuka- ¶0031, at least discloses The text recognizer 102 recognizes the text included in each of the text regions extracted by the text region extractor 101 by using, for example, an optical character recognition (OCR) technique, so as to generate text information. The text recognizer 102 also registers the generated text information in the text region information 111 as the original text);
in response to no text information being determined to exist in the preset display region, performing an image recognition process on the preset display region to determine an importance of an image of the preset display region (Otsuka- ¶0031, at least discloses The text recognizer 102 recognizes the text included in each of the text regions extracted by the text region extractor 101 by using, for example, an optical character recognition (OCR) technique, so as to generate text information. The text recognizer 102 also registers the generated text information in the text region information 111 as the original text);
in response to some text information being determined to exist in the preset display region or the image of the preset display region being determined to be of high importance, determining that important information appears in the preset display region (Otsuka- ¶0030-0031, at least disclose If text is in included in the image information received by the document receiver 100, the text region extractor 101 extracts regions where text items are disposed, as text regions. The text region extractor 101 registers the coordinates, height, and width of each of the extracted text regions in text region information 111 of the storage unit 11 […] The text recognizer 102 recognizes the text included in each of the text regions extracted by the text region extractor 101 by using, for example, an optical character recognition (OCR) technique, so as to generate text information); and
in response to no text information being determined to exist in the preset display region and the image of the preset display region being determined to be of low importance, determining that no important information appears in the preset display region (Otsuka- ¶0031, at least discloses The text recognizer 102 also registers the generated text information in the text region information 111 as the original text).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao/Yasukawa to incorporate the teachings of Otsuka, applying an optical character recognition (OCR) technique in Gao/Yasukawa's teachings, in order to perform a text recognition process on the preset display region to determine whether any text information exists in the preset display region; in response to no text information being determined to exist in the preset display region, perform an image recognition process on the preset display region to determine an importance of an image of the preset display region; and, in response to some text information being determined to exist in the preset display region or the image of the preset display region being determined to be of high importance, determine that important information appears in the preset display region.
Doing so would enhance the process of performing character recognition on the text included in the text regions; and editing the text regions in accordance with the content of a received operation.
The device of claim 12 is similar in scope to the functions performed by the method of claim 2 and therefore claim 12 is rejected under the same rationale.
6. Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Gao in view of Yasukawa, and further in view of Robbin et al. (“Robbin”) [US-20150346919-A1].
Regarding claim 4, Gao in view of Yasukawa discloses the method according to claim 1, including determining whether any important information or any human machine interface appears in the preset display region on the display interface (see Claim 1 rejection for detailed analysis), and further discloses:
obtaining position information and shape information of the operable icons;
determining whether any human machine interface appears in the preset display region (Gao- ¶0010-0011, at least disclose when target display content is loaded, identifying a characteristic area of current display content in a display area corresponding to the target display content; determining the display position of the loading icon in the display area according to the characteristic area; ¶0017, at least discloses identifying a target object in the current display content, and determining the area where the target object is located as the first characteristic area; ¶0126, at least discloses taking the terminal as an example to load the original image of the thumbnail, when the terminal receives the click signal acting on the thumbnail and then starts to load the original image of the thumbnail, the terminal may identify the target object in the enlarged thumbnail and determine the area where the target object is located as the first feature area [determining whether any human machine interface appears in the preset display region]).
The prior art does not explicitly disclose traversing a view tree of the display interface to determine all operable leaf nodes in the display interface; obtaining position information and shape information of the operable leaf nodes; and based on the position information and the shape information of the operable leaf nodes, determining whether any human machine interface corresponding to the operable leaf nodes appears in the preset display region.
However, Robbin discloses
traversing a view tree of the display interface to determine all operable leaf nodes in the display interface (Robbin- Figs. 11A-11E and ¶0200, at least disclose If the child node is a leaf node of the category tree, the device 100 displays a transition to a leaf-node user interface. FIGS. 11A-E illustrate a transition to navigate from the user interface 830 to a user interface associated with a leaf node; ¶0205-0206, at least disclose In FIG. 11E, the user interface 1130 is a leaf-node user interface, or an interface associated with the lowest level in the category hierarchy […] The header region 1132 identifies the node of the category tree associated with the leaf-node user interface 1130);
obtaining position information and shape information of the operable leaf nodes; and based on the position information and the shape information of the operable leaf nodes, determining whether any human machine interface corresponding to the operable leaf nodes appears in the preset display region (Robbin- ¶0167, at least discloses the location of a focus selector (e.g., a cursor, a contact or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button; Figs. 11A-11E and ¶0200, at least disclose If the child node is a leaf node of the category tree, the device 100 displays a transition to a leaf-node user interface. FIGS. 11A-E illustrate a transition to navigate from the user interface 830 to a user interface associated with a leaf node; ¶0205-0206, at least disclose In FIG. 11E, the user interface 1130 is a leaf-node user interface, or an interface associated with the lowest level in the category hierarchy […] The header region 1132 identifies the node of the category tree associated with the leaf-node user interface 1130).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao/Yasukawa to incorporate the teachings of Robbin, applying a user interface associated with leaf nodes in Gao/Yasukawa's teachings, in order to traverse a view tree of the display interface to determine all operable leaf nodes in the display interface; obtain position information and shape information of the operable leaf nodes; and, based on the position information and the shape information of the operable leaf nodes, determine whether any human machine interface corresponding to the operable leaf nodes appears in the preset display region.
Doing so would provide for logical navigation of the content hierarchy and illustrate a relationship between content items and the rest of the hierarchy to provide users with context for the user interfaces.
The device of claim 14 is similar in scope to the functions performed by the method of claim 4 and therefore claim 14 is rejected under the same rationale.
7. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gao in view of Yasukawa, and further in view of Han et al. (“Han”) [US-2013/0173270-A1].
Regarding claim 9, Gao in view of Yasukawa discloses the method according to claim 1. The prior art does not explicitly disclose, but Han discloses, wherein:
the application program is a voice control program (Han- ¶0012, at least discloses an electronic apparatus which provides different voice task modes according to a voice command and which displays different voice guide information corresponding to each of the different voice task modes; ¶0014, at least discloses receiving a voice input; and if the voice input is a first voice command, changing a mode of the electronic apparatus to a first voice task mode in which the electronic apparatus received further voice input, and if the voice input is a second voice command, changing the mode of the electronic apparatus to a second voice task mode in which said further voice input is received via an external apparatus which operates with the electronic apparatus; ¶0020, at least discloses The first voice command may be a preset word, and the second voice command may be said further voice input after receiving input indicating that a preset button of the external apparatus was pressed), and the application icon is a voice icon of the voice control program (Han- ¶0099, at least discloses the plurality of voice items of the first voice guide information may include voice items, which may execute a function of the electronic apparatus 100 if a voice of a user is input, such as a channel up/down voice icon, a volume up/down voice icon, and a mute voice icon).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gao/Yasukawa to incorporate the teachings of Han, applying the voice command in Gao/Yasukawa's teachings, such that the application program is a voice control program and the application icon is a voice icon of the voice control program.
Doing so would provide efficiency and flexibility in controlling the electronic apparatus by using a microphone of the electronic apparatus or a microphone of the external apparatus.
The device of claim 19 is similar in scope to the functions performed by the method of claim 9 and therefore claim 19 is rejected under the same rationale.
Allowable Subject Matter
8. Claims 3, 8, 13 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
9. The following is a statement of reasons for the indication of allowable subject matter:
Regarding Claim 3, the combination of prior art teaches the method of Claim 1. However, in the context of claims 1, 2 and 3 as a whole, the combination of prior art does not teach collecting statistics on the color of each pixel in the image of the preset display region; and, based on a number of colors contained in the image and/or a color difference between adjacent pixels in the image, determining the importance of the image of the preset display region. Therefore, Claim 3 in the context of claims 1 and 2 as a whole comprises allowable subject matter.
Regarding Claim 13, the combination of prior art references teaches the method of Claim 10. However, in the context of claims 10, 12, and 13 as a whole, the combination does not teach: collect statistics on the color of each pixel in the image of the preset display region; and, based on a number of colors contained in the image and/or a color difference between adjacent pixels in the image, determine the importance of the image of the preset display region. Therefore, Claim 13, in the context of claims 10 and 12 as a whole, comprises allowable subject matter.
Regarding Claim 8, the combination of prior art references teaches the method of Claim 1. However, in the context of claims 1, 6, and 7 as a whole, the combination does not teach: in response to some important information or some human machine interface being determined to appear in any one of the first candidate region and the second candidate region, shifting the corresponding candidate region in the first shift direction by the first shift distance to determine a third candidate region, shifting the corresponding candidate region in the second shift direction by the second shift distance to determine a fourth candidate region, and determining whether any important information or any human machine interface appears in the third candidate region and the fourth candidate region, respectively, until none of the candidate regions includes any important information or any human machine interface, or an end of the display interface is reached in an up-down direction or a left-right direction; based on the presence of any important information or any human machine interface in any of the candidate regions, scanning each candidate region to determine at least one displayable region; and determining the at least one displayable region closest to the preset display region to be the target display region. Therefore, Claim 8, in the context of claims 1, 6, and 7 as a whole, comprises allowable subject matter.
Regarding Claim 18, the combination of prior art references teaches the method of Claim 10. However, in the context of claims 10, 16, and 17 as a whole, the combination does not teach: in response to some important information or some human machine interface being determined to appear in any one of the first candidate region and the second candidate region, shift the corresponding candidate region in the first shift direction by the first shift distance to determine a third candidate region, shift the corresponding candidate region in the second shift direction by the second shift distance to determine a fourth candidate region, and determine whether any important information or any human machine interface appears in the third candidate region and the fourth candidate region, respectively, until none of the candidate regions includes any important information or any human machine interface, or an end of the display interface is reached in an up-down direction or a left-right direction; based on the presence of any important information or any human machine interface in any of the candidate regions, scan each candidate region to determine at least one displayable region; and determine the at least one displayable region closest to the preset display region to be the target display region. Therefore, Claim 18, in the context of claims 10, 16, and 17 as a whole, comprises allowable subject matter.
Conclusion
10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. These references are listed on the attached PTO-892 form.
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE whose telephone number is (571)272-5330. The examiner can normally be reached 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL LE/Primary Examiner, Art Unit 2614